From a59be76ac1f701d0e61c55d141ae508817da627c Mon Sep 17 00:00:00 2001
From: terrytangyuan
Date: Mon, 11 Dec 2023 18:03:33 +0000
Subject: [PATCH] deploy: 23292c429430e71fdc7c59c89878053ee821b80d

---
 index.html               |   1 +
 search/search_index.json |   2 +-
 sitemap.xml              | 376 +++++++++++++++++++--------------
 sitemap.xml.gz           | Bin 296 -> 296 bytes
 4 files changed, 190 insertions(+), 189 deletions(-)

diff --git a/index.html b/index.html
index 053409bc52c4..2f194fd1ccb6 100644
--- a/index.html
+++ b/index.html
@@ -4153,6 +4153,7 @@

Community Blogs and Presentations
  • Argo Ansible role: Provisioning Argo Workflows on OpenShift
  • Argo Workflows vs Apache Airflow
  • CI/CD with Argo on Kubernetes
+ • Distributed Machine Learning Patterns from Manning Publication
  • Running Argo Workflows Across Multiple Kubernetes Clusters
  • Open Source Model Management Roundup: Polyaxon, Argo, and Seldon
  • Producing 200 OpenStreetMap extracts in 35 minutes using a scalable data workflow
  • diff --git a/search/search_index.json b/search/search_index.json index c9e97c863cd1..f2e98071c4e5 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Argo Workflows \u00b6 What is Argo Workflows? \u00b6 Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD (Custom Resource Definition). Define workflows where each step in the workflow is a container. Model multi-step workflows as a sequence of tasks or capture the dependencies between tasks using a directed acyclic graph (DAG). Easily run compute intensive jobs for machine learning or data processing in a fraction of the time using Argo Workflows on Kubernetes. Argo is a Cloud Native Computing Foundation (CNCF) graduated project. Use Cases \u00b6 Machine Learning pipelines Data and batch processing Infrastructure automation CI/CD Other use cases Why Argo Workflows? \u00b6 Argo Workflows is the most popular workflow execution engine for Kubernetes. Light-weight, scalable, and easier to use. Designed from the ground up for containers without the overhead and limitations of legacy VM and server-based environments. Cloud agnostic and can run on any Kubernetes cluster. Read what people said in our latest survey Try Argo Workflows \u00b6 Access the demo environment (login using Github) Who uses Argo Workflows? \u00b6 About 200+ organizations are officially using Argo Workflows Ecosystem \u00b6 Just some of the projects that use or rely on Argo Workflows (complete list here ): Argo Events Couler Hera Katib Kedro Kubeflow Pipelines Netflix Metaflow Onepanel Orchest Ploomber Seldon SQLFlow Client Libraries \u00b6 Check out our Java, Golang and Python clients . Quickstart \u00b6 Get started here Walk-through examples Documentation \u00b6 View the docs Features \u00b6 An incomplete list of features Argo Workflows provide: UI to visualize and manage Workflows Artifact support (S3, Artifactory, Alibaba Cloud OSS, Azure Blob Storage, HTTP, Git, GCS, raw) Workflow templating to store commonly used Workflows in the cluster Archiving Workflows after executing for later access Scheduled workflows using cron Server interface with REST API (HTTP and GRPC) DAG or Steps based declaration of workflows Step level input & outputs (artifacts/parameters) Loops Parameterization Conditionals Timeouts (step & workflow level) Retry (step & workflow level) Resubmit (memoized) Suspend & Resume Cancellation K8s resource orchestration Exit Hooks (notifications, cleanup) Garbage collection of completed workflow Scheduling (affinity/tolerations/node selectors) Volumes (ephemeral/existing) Parallelism limits Daemoned steps DinD (docker-in-docker) Script steps Event emission Prometheus metrics Multiple executors Multiple pod and workflow garbage collection strategies Automatically calculated resource usage per step Java/Golang/Python SDKs Pod Disruption Budget support Single-sign on (OAuth2/OIDC) Webhook triggering CLI Out-of-the box and custom Prometheus metrics Windows container support Embedded widgets Multiplex log viewer Community Meetings \u00b6 We host monthly community meetings where we and the community showcase demos and discuss the current and future state of the project. Feel free to join us! For Community Meeting information, minutes and recordings please see here . 
Participation in the Argo Workflows project is governed by the CNCF Code of Conduct Community Blogs and Presentations \u00b6 Awesome-Argo: A Curated List of Awesome Projects and Resources Related to Argo Automation of Everything - How To Combine Argo Events, Workflows & Pipelines, CD, and Rollouts Argo Workflows and Pipelines - CI/CD, Machine Learning, and Other Kubernetes Workflows Argo Ansible role: Provisioning Argo Workflows on OpenShift Argo Workflows vs Apache Airflow CI/CD with Argo on Kubernetes Running Argo Workflows Across Multiple Kubernetes Clusters Open Source Model Management Roundup: Polyaxon, Argo, and Seldon Producing 200 OpenStreetMap extracts in 35 minutes using a scalable data workflow Argo integration review TGI Kubernetes with Joe Beda: Argo workflow system Project Resources \u00b6 Argo Project GitHub organization Argo Website Argo Slack Security \u00b6 See SECURITY.md .","title":"Home"},{"location":"#argo-workflows","text":"","title":"Argo Workflows"},{"location":"#what-is-argo-workflows","text":"Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD (Custom Resource Definition). Define workflows where each step in the workflow is a container. Model multi-step workflows as a sequence of tasks or capture the dependencies between tasks using a directed acyclic graph (DAG). Easily run compute intensive jobs for machine learning or data processing in a fraction of the time using Argo Workflows on Kubernetes. Argo is a Cloud Native Computing Foundation (CNCF) graduated project.","title":"What is Argo Workflows?"},{"location":"#use-cases","text":"Machine Learning pipelines Data and batch processing Infrastructure automation CI/CD Other use cases","title":"Use Cases"},{"location":"#why-argo-workflows","text":"Argo Workflows is the most popular workflow execution engine for Kubernetes. Light-weight, scalable, and easier to use. Designed from the ground up for containers without the overhead and limitations of legacy VM and server-based environments. Cloud agnostic and can run on any Kubernetes cluster. 
Read what people said in our latest survey","title":"Why Argo Workflows?"},{"location":"#try-argo-workflows","text":"Access the demo environment (login using Github)","title":"Try Argo Workflows"},{"location":"#who-uses-argo-workflows","text":"About 200+ organizations are officially using Argo Workflows","title":"Who uses Argo Workflows?"},{"location":"#ecosystem","text":"Just some of the projects that use or rely on Argo Workflows (complete list here ): Argo Events Couler Hera Katib Kedro Kubeflow Pipelines Netflix Metaflow Onepanel Orchest Ploomber Seldon SQLFlow","title":"Ecosystem"},{"location":"#client-libraries","text":"Check out our Java, Golang and Python clients .","title":"Client Libraries"},{"location":"#quickstart","text":"Get started here Walk-through examples","title":"Quickstart"},{"location":"#documentation","text":"View the docs","title":"Documentation"},{"location":"#features","text":"An incomplete list of features Argo Workflows provide: UI to visualize and manage Workflows Artifact support (S3, Artifactory, Alibaba Cloud OSS, Azure Blob Storage, HTTP, Git, GCS, raw) Workflow templating to store commonly used Workflows in the cluster Archiving Workflows after executing for later access Scheduled workflows using cron Server interface with REST API (HTTP and GRPC) DAG or Steps based declaration of workflows Step level input & outputs (artifacts/parameters) Loops Parameterization Conditionals Timeouts (step & workflow level) Retry (step & workflow level) Resubmit (memoized) Suspend & Resume Cancellation K8s resource orchestration Exit Hooks (notifications, cleanup) Garbage collection of completed workflow Scheduling (affinity/tolerations/node selectors) Volumes (ephemeral/existing) Parallelism limits Daemoned steps DinD (docker-in-docker) Script steps Event emission Prometheus metrics Multiple executors Multiple pod and workflow garbage collection strategies Automatically calculated resource usage per step Java/Golang/Python SDKs Pod Disruption Budget support Single-sign on (OAuth2/OIDC) Webhook triggering CLI Out-of-the box and custom Prometheus metrics Windows container support Embedded widgets Multiplex log viewer","title":"Features"},{"location":"#community-meetings","text":"We host monthly community meetings where we and the community showcase demos and discuss the current and future state of the project. Feel free to join us! For Community Meeting information, minutes and recordings please see here . 
Participation in the Argo Workflows project is governed by the CNCF Code of Conduct","title":"Community Meetings"},{"location":"#community-blogs-and-presentations","text":"Awesome-Argo: A Curated List of Awesome Projects and Resources Related to Argo Automation of Everything - How To Combine Argo Events, Workflows & Pipelines, CD, and Rollouts Argo Workflows and Pipelines - CI/CD, Machine Learning, and Other Kubernetes Workflows Argo Ansible role: Provisioning Argo Workflows on OpenShift Argo Workflows vs Apache Airflow CI/CD with Argo on Kubernetes Running Argo Workflows Across Multiple Kubernetes Clusters Open Source Model Management Roundup: Polyaxon, Argo, and Seldon Producing 200 OpenStreetMap extracts in 35 minutes using a scalable data workflow Argo integration review TGI Kubernetes with Joe Beda: Argo workflow system","title":"Community Blogs and Presentations"},{"location":"#project-resources","text":"Argo Project GitHub organization Argo Website Argo Slack","title":"Project Resources"},{"location":"#security","text":"See SECURITY.md .","title":"Security"},{"location":"CONTRIBUTING/","text":"Contributing \u00b6 How To Provide Feedback \u00b6 Please raise an issue in Github . Code of Conduct \u00b6 See CNCF Code of Conduct . Community Meetings (monthly) \u00b6 A monthly opportunity for users and maintainers of Workflows and Events to share their current work and hear about what\u2019s coming on the roadmap. Please join us! For Community Meeting information, minutes and recordings please see here . Contributor Meetings (twice monthly) \u00b6 A weekly opportunity for committers and maintainers of Workflows and Events to discuss their current work and talk about what\u2019s next. Feel free to join us! For Contributor Meeting information, minutes and recordings please see here . How To Contribute \u00b6 We're always looking for contributors. Documentation - something missing or unclear? Please submit a pull request! Code contribution - investigate a good first issue , or anything not assigned. You can work on an issue without being assigned. Join the #argo-contributors channel on our Slack . Running Locally \u00b6 To run Argo Workflows locally for development: running locally . Committing \u00b6 See the Committing Guidelines . Dependencies \u00b6 Dependencies increase the risk of security issues and have on-going maintenance costs. The dependency must pass these test: A strong use case. It has an acceptable license (e.g. MIT). It is actively maintained. It has no security issues. Example, should we add fasttemplate , view the Snyk report : Test Outcome A strong use case. \u274c Fail. We can use text/template . It has an acceptable license (e.g. MIT) \u2705 Pass. MIT license. It is actively maintained. \u274c Fail. Project is inactive. It has no security issues. \u2705 Pass. No known security issues. No, we should not add that dependency. Test Policy \u00b6 Changes without either unit or e2e tests are unlikely to be accepted. See the pull request template . Contributor Workshop \u00b6 Please check out the following resources if you are interested in contributing: 90m hands-on contributor workshop . Deep-dive into components and hands-on experiments . 
Architecture overview .","title":"Contributing"},{"location":"CONTRIBUTING/#contributing","text":"","title":"Contributing"},{"location":"CONTRIBUTING/#how-to-provide-feedback","text":"Please raise an issue in Github .","title":"How To Provide Feedback"},{"location":"CONTRIBUTING/#code-of-conduct","text":"See CNCF Code of Conduct .","title":"Code of Conduct"},{"location":"CONTRIBUTING/#community-meetings-monthly","text":"A monthly opportunity for users and maintainers of Workflows and Events to share their current work and hear about what\u2019s coming on the roadmap. Please join us! For Community Meeting information, minutes and recordings please see here .","title":"Community Meetings (monthly)"},{"location":"CONTRIBUTING/#contributor-meetings-twice-monthly","text":"A weekly opportunity for committers and maintainers of Workflows and Events to discuss their current work and talk about what\u2019s next. Feel free to join us! For Contributor Meeting information, minutes and recordings please see here .","title":"Contributor Meetings (twice monthly)"},{"location":"CONTRIBUTING/#how-to-contribute","text":"We're always looking for contributors. Documentation - something missing or unclear? Please submit a pull request! Code contribution - investigate a good first issue , or anything not assigned. You can work on an issue without being assigned. Join the #argo-contributors channel on our Slack .","title":"How To Contribute"},{"location":"CONTRIBUTING/#running-locally","text":"To run Argo Workflows locally for development: running locally .","title":"Running Locally"},{"location":"CONTRIBUTING/#committing","text":"See the Committing Guidelines .","title":"Committing"},{"location":"CONTRIBUTING/#dependencies","text":"Dependencies increase the risk of security issues and have on-going maintenance costs. The dependency must pass these test: A strong use case. It has an acceptable license (e.g. MIT). It is actively maintained. It has no security issues. Example, should we add fasttemplate , view the Snyk report : Test Outcome A strong use case. \u274c Fail. We can use text/template . It has an acceptable license (e.g. MIT) \u2705 Pass. MIT license. It is actively maintained. \u274c Fail. Project is inactive. It has no security issues. \u2705 Pass. No known security issues. No, we should not add that dependency.","title":"Dependencies"},{"location":"CONTRIBUTING/#test-policy","text":"Changes without either unit or e2e tests are unlikely to be accepted. See the pull request template .","title":"Test Policy"},{"location":"CONTRIBUTING/#contributor-workshop","text":"Please check out the following resources if you are interested in contributing: 90m hands-on contributor workshop . Deep-dive into components and hands-on experiments . Architecture overview .","title":"Contributor Workshop"},{"location":"access-token/","text":"Access Token \u00b6 Overview \u00b6 If you want to automate tasks with the Argo Server API or CLI, you will need an access token. Prerequisites \u00b6 Firstly, create a role with minimal permissions. This example role for jenkins only permission to update and list workflows: kubectl create role jenkins --verb = list,update --resource = workflows.argoproj.io Create a service account for your service: kubectl create sa jenkins Tip for Tokens Creation \u00b6 Create a unique service account for each client: (a) you'll be able to correctly secure your workflows (b) revoke the token without impacting other clients. 
Bind the service account to the role (in this case in the argo namespace): kubectl create rolebinding jenkins --role = jenkins --serviceaccount = argo:jenkins Token Creation \u00b6 You now need to create a secret to hold your token: kubectl apply -f - </oauth2/callback. It must be # browser-accessible. redirectUrl: https://argo-workflows.mydomain.com/oauth2/callback Example Helm chart configuration for authenticating against Argo CD's Dex \u00b6 argo-cd/values.yaml : dex : image : tag : v2.35.0 env : - name : ARGO_WORKFLOWS_SSO_CLIENT_SECRET valueFrom : secretKeyRef : name : argo-workflows-sso key : client-secret server : config : dex.config : | staticClients: - id: argo-workflows-sso name: Argo Workflow redirectURIs: - https://argo-workflows.mydomain.com/oauth2/callback secretEnv: ARGO_WORKFLOWS_SSO_CLIENT_SECRET argo-workflows/values.yaml : server : extraArgs : - --auth-mode=sso sso : issuer : https://argo-cd.mydomain.com/api/dex # sessionExpiry defines how long your login is valid for in hours. (optional, default: 10h) sessionExpiry : 240h clientId : name : argo-workflows-sso key : client-id clientSecret : name : argo-workflows-sso key : client-secret redirectUrl : https://argo-workflows.mydomain.com/oauth2/callback","title":"Use Argo CD Dex for authentication"},{"location":"argo-server-sso-argocd/#use-argo-cd-dex-for-authentication","text":"It is possible to have the Argo Workflows Server use the Argo CD Dex instance for authentication, for instance if you use Okta with SAML which cannot integrate with Argo Workflows directly. In order to make this happen, you will need the following: You must be using at least Dex v2.35.0 , because that's when staticClients[].secretEnv was added. That means Argo CD 1.7.12 and above. A secret containing two keys, client-id and client-secret to be used by both Dex and Argo Workflows Server. client-id is argo-workflows-sso in this example, client-secret can be any random string. If Argo CD and Argo Workflows are installed in different namespaces the secret must be present in both of them. Example: apiVersion : v1 kind : Secret metadata : name : argo-workflows-sso data : # client-id is 'argo-workflows-sso' client-id : YXJnby13b3JrZmxvd3Mtc3Nv # client-secret is 'MY-SECRET-STRING-CAN-BE-UUID' client-secret : TVktU0VDUkVULVNUUklORy1DQU4tQkUtVVVJRA== --auth-mode=sso server argument added A Dex staticClients configured for argo-workflows-sso The sso configuration filled out in Argo Workflows Server to match","title":"Use Argo CD Dex for authentication"},{"location":"argo-server-sso-argocd/#example-manifests-for-authenticating-against-argo-cds-dex-kustomize","text":"In Argo CD, add an environment variable to Dex deployment and configuration: --- apiVersion : apps/v1 kind : Deployment metadata : name : argocd-dex-server spec : template : spec : containers : - name : dex env : - name : ARGO_WORKFLOWS_SSO_CLIENT_SECRET valueFrom : secretKeyRef : name : argo-workflows-sso key : client-secret --- apiVersion : v1 kind : ConfigMap metadata : name : argocd-cm data : # Kustomize sees the value of dex.config as a single string instead of yaml. 
It will not merge # Dex settings, but instead it will replace the entire configuration with the settings below, # so add these to the existing config instead of setting them in a separate file dex.config : | # Setting staticClients allows Argo Workflows to use Argo CD's Dex installation for authentication staticClients: - id: argo-workflows-sso name: Argo Workflow redirectURIs: - https://argo-workflows.mydomain.com/oauth2/callback secretEnv: ARGO_WORKFLOWS_SSO_CLIENT_SECRET Note that the id field of staticClients must match the client-id . In Argo Workflows add --auth-mode=sso argument to argo-server deployment. --- apiVersion : apps/v1 kind : Deployment metadata : name : argo-server spec : template : spec : containers : - name : argo-server args : - server - --auth-mode=sso --- apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : # SSO Configuration for the Argo server. # You must also start argo server with `--auth-mode sso`. # https://argoproj.github.io/argo-workflows/argo-server-auth-mode/ sso : | # This is the root URL of the OIDC provider (required). issuer: https://argo-cd.mydomain.com/api/dex # This is name of the secret and the key in it that contain OIDC client # ID issued to the application by the provider (required). clientId: name: argo-workflows-sso key: client-id # This is name of the secret and the key in it that contain OIDC client # secret issued to the application by the provider (required). clientSecret: name: argo-workflows-sso key: client-secret # This is the redirect URL supplied to the provider (required). It must # be in the form /oauth2/callback. It must be # browser-accessible. redirectUrl: https://argo-workflows.mydomain.com/oauth2/callback","title":"Example manifests for authenticating against Argo CD's Dex (Kustomize)"},{"location":"argo-server-sso-argocd/#example-helm-chart-configuration-for-authenticating-against-argo-cds-dex","text":"argo-cd/values.yaml : dex : image : tag : v2.35.0 env : - name : ARGO_WORKFLOWS_SSO_CLIENT_SECRET valueFrom : secretKeyRef : name : argo-workflows-sso key : client-secret server : config : dex.config : | staticClients: - id: argo-workflows-sso name: Argo Workflow redirectURIs: - https://argo-workflows.mydomain.com/oauth2/callback secretEnv: ARGO_WORKFLOWS_SSO_CLIENT_SECRET argo-workflows/values.yaml : server : extraArgs : - --auth-mode=sso sso : issuer : https://argo-cd.mydomain.com/api/dex # sessionExpiry defines how long your login is valid for in hours. (optional, default: 10h) sessionExpiry : 240h clientId : name : argo-workflows-sso key : client-id clientSecret : name : argo-workflows-sso key : client-secret redirectUrl : https://argo-workflows.mydomain.com/oauth2/callback","title":"Example Helm chart configuration for authenticating against Argo CD's Dex"},{"location":"argo-server-sso/","text":"Argo Server SSO \u00b6 v2.9 and after It is possible to use Dex for authentication. This document describes how to set up Argo Workflows and Argo CD so that Argo Workflows uses Argo CD's Dex server for authentication. To start Argo Server with SSO \u00b6 Firstly, configure the settings workflow-controller-configmap.yaml with the correct OAuth 2 values. If working towards an OIDC configuration the Argo CD project has guides on its similar (though different) process for setting up OIDC providers. It also includes examples for specific providers. The main difference is that the Argo CD docs mention that their callback address endpoint is /auth/callback . 
For Argo Workflows, the default format is /oauth2/callback as shown in this comment in the default values.yaml file in the helm chart. Next, create the Kubernetes secrets for holding the OAuth2 client-id and client-secret . You may refer to the kubernetes documentation on Managing secrets . For example by using kubectl with literals: kubectl create secret -n argo generic client-id-secret \\ --from-literal = client-id-key = foo kubectl create secret -n argo generic client-secret-secret \\ --from-literal = client-secret-key = bar Then, start the Argo Server using the SSO auth mode : argo server --auth-mode sso --auth-mode ... Token Revocation \u00b6 v2.12 and after As of v2.12 we issue a JWE token for users rather than give them the ID token from your OAuth2 provider. This token is opaque and has a longer expiry time (10h by default). The token encryption key is automatically generated by the Argo Server and stored in a Kubernetes secret name sso . You can revoke all tokens by deleting the encryption key and restarting the Argo Server (so it generates a new key). kubectl delete secret sso Warning The old key will be in the memory the any running Argo Server, and they will therefore accept and user with token encrypted using the old key. Every Argo Server MUST be restarted. All users will need to log in again. Sorry. SSO RBAC \u00b6 v2.12 and after You can optionally add RBAC to SSO. This allows you to give different users different access levels. Except for client auth mode, all users of the Argo Server must ultimately use a service account. So we allow you to define rules that map a user (maybe using their OIDC groups) to a service account in the same namespace as argo server by annotating the service account. To allow service accounts to manage resources in other namespaces create a role and role binding in the target namespace. RBAC config is installation-level, so any changes will need to be made by the team that installed Argo. Many complex rules will be burdensome on that team. Firstly, enable the rbac: setting in workflow-controller-configmap.yaml . You likely want to configure RBAC using groups, so add scopes: to the SSO settings: sso : # ... scopes : - groups rbac : enabled : true Note Not all OIDC providers support the groups scope. Please speak to your provider about their options. To configure a service account to be used, annotate it: apiVersion : v1 kind : ServiceAccount metadata : name : admin-user annotations : # The rule is an expression used to determine if this service account # should be used. # * `groups` - an array of the OIDC groups # * `iss` - the issuer (\"argo-server\") # * `sub` - the subject (typically the username) # Must evaluate to a boolean. # If you want an account to be the default to use, this rule can be \"true\". # Details of the expression language are available in # https://github.com/antonmedv/expr/blob/master/docs/Language-Definition.md. workflows.argoproj.io/rbac-rule : \"'admin' in groups\" # The precedence is used to determine which service account to use whe # Precedence is an integer. It may be negative. If omitted, it defaults to \"0\". # Numerically higher values have higher precedence (not lower, which maybe # counter-intuitive to you). # If two rules match and have the same precedence, then which one used will # be arbitrary. workflows.argoproj.io/rbac-rule-precedence : \"1\" If no rule matches, we deny the user access. Tip: You'll probably want to configure a default account to use if no other rule matches, e.g. 
a read-only account, you can do this as follows: metadata : name : read-only annotations : workflows.argoproj.io/rbac-rule : \"true\" workflows.argoproj.io/rbac-rule-precedence : \"0\" The precedence must be the lowest of all your service accounts. As of Kubernetes v1.24, secrets for a service account token are no longer automatically created. Therefore, service account secrets for SSO RBAC must be created manually. See Manually create secrets for detailed instructions. SSO RBAC Namespace Delegation \u00b6 v3.3 and after You can optionally configure RBAC SSO per namespace. Typically, on organization has a Kubernetes cluster and a central team (the owner of the cluster) manages the cluster. Along with this, there are multiple namespaces which are owned by individual teams. This feature would help namespace owners to define RBAC for their own namespace. The feature is currently in beta. To enable the feature, set env variable SSO_DELEGATE_RBAC_TO_NAMESPACE=true in your argo-server deployment. Recommended usage \u00b6 Configure a default account in the installation namespace that allows access to all users of your organization. This service account allows a user to login to the cluster. You could optionally add a workflow read-only role and role-binding. apiVersion : v1 kind : ServiceAccount metadata : name : user-default-login annotations : workflows.argoproj.io/rbac-rule : \"true\" workflows.argoproj.io/rbac-rule-precedence : \"0\" Note All users MUST map to a cluster service account (such as the one above) before a namespace service account can apply. Now, for the namespace that you own, configure a service account that allows members of your team to perform operations in your namespace. Make sure that the precedence of the namespace service account is higher than the precedence of the login service account. Create an appropriate role for this service account and bind it with a role-binding. apiVersion : v1 kind : ServiceAccount metadata : name : my-namespace-read-write-user namespace : my-namespace annotations : workflows.argoproj.io/rbac-rule : \"'my-team' in groups\" workflows.argoproj.io/rbac-rule-precedence : \"1\" With this configuration, when a user is logged in via SSO, makes a request in my-namespace , and the rbac-rule matches, this service account allows the user to perform that operation. If no service account matches in the namespace, the first service account ( user-default-login ) and its associated role will be used to perform the operation. SSO Login Time \u00b6 v2.12 and after By default, your SSO session will expire after 10 hours. You can change this by adding a sessionExpiry to your workflow-controller-configmap.yaml under the SSO heading. sso : # Expiry defines how long your login is valid for in hours. (optional) sessionExpiry : 240h Custom claims \u00b6 v3.1.4 and after If your OIDC provider provides groups information with a claim name other than groups , you could configure config-map to specify custom claim name for groups. Argo now arbitrary custom claims and any claim can be used for expr eval . However, since group information is displayed in UI, it still needs to be an array of strings with group names as elements. The customClaim in this case will be mapped to groups key and we can use the same key groups for evaluating our expressions sso : # Specify custom claim name for OIDC groups. customGroupClaimName : argo_groups If your OIDC provider provides groups information only using the user-info endpoint (e.g. 
Okta), you could configure userInfoPath to specify the user info endpoint that contains the groups claim. sso : userInfoPath : /oauth2/v1/userinfo Example Expression \u00b6 # assuming customClaimGroupName: argo_groups workflows.argoproj.io/rbac-rule: \"'argo_admins' in groups\" Filtering groups \u00b6 v3.5 and above You can configure filterGroupsRegex to filter the groups returned by the OIDC provider. Some use-cases for this include: You have multiple applications using the same OIDC provider, and you only want to use groups that are relevant to Argo Workflows. You have many groups and exceed the 4KB cookie size limit (cookies are used to store authentication tokens). If this occurs, login will fail. sso : # Specify a list of regular expressions to filter the groups returned by the OIDC provider. # A logical \"OR\" is used between each regex in the list filterGroupsRegex : - \".*argo-wf.*\" - \".*argo-workflow.*\"","title":"Argo Server SSO"},{"location":"argo-server-sso/#argo-server-sso","text":"v2.9 and after It is possible to use Dex for authentication. This document describes how to set up Argo Workflows and Argo CD so that Argo Workflows uses Argo CD's Dex server for authentication.","title":"Argo Server SSO"},{"location":"argo-server-sso/#to-start-argo-server-with-sso","text":"Firstly, configure the settings workflow-controller-configmap.yaml with the correct OAuth 2 values. If working towards an OIDC configuration the Argo CD project has guides on its similar (though different) process for setting up OIDC providers. It also includes examples for specific providers. The main difference is that the Argo CD docs mention that their callback address endpoint is /auth/callback . For Argo Workflows, the default format is /oauth2/callback as shown in this comment in the default values.yaml file in the helm chart. Next, create the Kubernetes secrets for holding the OAuth2 client-id and client-secret . You may refer to the kubernetes documentation on Managing secrets . For example by using kubectl with literals: kubectl create secret -n argo generic client-id-secret \\ --from-literal = client-id-key = foo kubectl create secret -n argo generic client-secret-secret \\ --from-literal = client-secret-key = bar Then, start the Argo Server using the SSO auth mode : argo server --auth-mode sso --auth-mode ...","title":"To start Argo Server with SSO"},{"location":"argo-server-sso/#token-revocation","text":"v2.12 and after As of v2.12 we issue a JWE token for users rather than give them the ID token from your OAuth2 provider. This token is opaque and has a longer expiry time (10h by default). The token encryption key is automatically generated by the Argo Server and stored in a Kubernetes secret name sso . You can revoke all tokens by deleting the encryption key and restarting the Argo Server (so it generates a new key). kubectl delete secret sso Warning The old key will be in the memory the any running Argo Server, and they will therefore accept and user with token encrypted using the old key. Every Argo Server MUST be restarted. All users will need to log in again. Sorry.","title":"Token Revocation"},{"location":"argo-server-sso/#sso-rbac","text":"v2.12 and after You can optionally add RBAC to SSO. This allows you to give different users different access levels. Except for client auth mode, all users of the Argo Server must ultimately use a service account. 
So we allow you to define rules that map a user (maybe using their OIDC groups) to a service account in the same namespace as argo server by annotating the service account. To allow service accounts to manage resources in other namespaces create a role and role binding in the target namespace. RBAC config is installation-level, so any changes will need to be made by the team that installed Argo. Many complex rules will be burdensome on that team. Firstly, enable the rbac: setting in workflow-controller-configmap.yaml . You likely want to configure RBAC using groups, so add scopes: to the SSO settings: sso : # ... scopes : - groups rbac : enabled : true Note Not all OIDC providers support the groups scope. Please speak to your provider about their options. To configure a service account to be used, annotate it: apiVersion : v1 kind : ServiceAccount metadata : name : admin-user annotations : # The rule is an expression used to determine if this service account # should be used. # * `groups` - an array of the OIDC groups # * `iss` - the issuer (\"argo-server\") # * `sub` - the subject (typically the username) # Must evaluate to a boolean. # If you want an account to be the default to use, this rule can be \"true\". # Details of the expression language are available in # https://github.com/antonmedv/expr/blob/master/docs/Language-Definition.md. workflows.argoproj.io/rbac-rule : \"'admin' in groups\" # The precedence is used to determine which service account to use whe # Precedence is an integer. It may be negative. If omitted, it defaults to \"0\". # Numerically higher values have higher precedence (not lower, which maybe # counter-intuitive to you). # If two rules match and have the same precedence, then which one used will # be arbitrary. workflows.argoproj.io/rbac-rule-precedence : \"1\" If no rule matches, we deny the user access. Tip: You'll probably want to configure a default account to use if no other rule matches, e.g. a read-only account, you can do this as follows: metadata : name : read-only annotations : workflows.argoproj.io/rbac-rule : \"true\" workflows.argoproj.io/rbac-rule-precedence : \"0\" The precedence must be the lowest of all your service accounts. As of Kubernetes v1.24, secrets for a service account token are no longer automatically created. Therefore, service account secrets for SSO RBAC must be created manually. See Manually create secrets for detailed instructions.","title":"SSO RBAC"},{"location":"argo-server-sso/#sso-rbac-namespace-delegation","text":"v3.3 and after You can optionally configure RBAC SSO per namespace. Typically, on organization has a Kubernetes cluster and a central team (the owner of the cluster) manages the cluster. Along with this, there are multiple namespaces which are owned by individual teams. This feature would help namespace owners to define RBAC for their own namespace. The feature is currently in beta. To enable the feature, set env variable SSO_DELEGATE_RBAC_TO_NAMESPACE=true in your argo-server deployment.","title":"SSO RBAC Namespace Delegation"},{"location":"argo-server-sso/#recommended-usage","text":"Configure a default account in the installation namespace that allows access to all users of your organization. This service account allows a user to login to the cluster. You could optionally add a workflow read-only role and role-binding. 
apiVersion : v1 kind : ServiceAccount metadata : name : user-default-login annotations : workflows.argoproj.io/rbac-rule : \"true\" workflows.argoproj.io/rbac-rule-precedence : \"0\" Note All users MUST map to a cluster service account (such as the one above) before a namespace service account can apply. Now, for the namespace that you own, configure a service account that allows members of your team to perform operations in your namespace. Make sure that the precedence of the namespace service account is higher than the precedence of the login service account. Create an appropriate role for this service account and bind it with a role-binding. apiVersion : v1 kind : ServiceAccount metadata : name : my-namespace-read-write-user namespace : my-namespace annotations : workflows.argoproj.io/rbac-rule : \"'my-team' in groups\" workflows.argoproj.io/rbac-rule-precedence : \"1\" With this configuration, when a user is logged in via SSO, makes a request in my-namespace , and the rbac-rule matches, this service account allows the user to perform that operation. If no service account matches in the namespace, the first service account ( user-default-login ) and its associated role will be used to perform the operation.","title":"Recommended usage"},{"location":"argo-server-sso/#sso-login-time","text":"v2.12 and after By default, your SSO session will expire after 10 hours. You can change this by adding a sessionExpiry to your workflow-controller-configmap.yaml under the SSO heading. sso : # Expiry defines how long your login is valid for in hours. (optional) sessionExpiry : 240h","title":"SSO Login Time"},{"location":"argo-server-sso/#custom-claims","text":"v3.1.4 and after If your OIDC provider provides groups information with a claim name other than groups , you could configure config-map to specify custom claim name for groups. Argo now arbitrary custom claims and any claim can be used for expr eval . However, since group information is displayed in UI, it still needs to be an array of strings with group names as elements. The customClaim in this case will be mapped to groups key and we can use the same key groups for evaluating our expressions sso : # Specify custom claim name for OIDC groups. customGroupClaimName : argo_groups If your OIDC provider provides groups information only using the user-info endpoint (e.g. Okta), you could configure userInfoPath to specify the user info endpoint that contains the groups claim. sso : userInfoPath : /oauth2/v1/userinfo","title":"Custom claims"},{"location":"argo-server-sso/#example-expression","text":"# assuming customClaimGroupName: argo_groups workflows.argoproj.io/rbac-rule: \"'argo_admins' in groups\"","title":"Example Expression"},{"location":"argo-server-sso/#filtering-groups","text":"v3.5 and above You can configure filterGroupsRegex to filter the groups returned by the OIDC provider. Some use-cases for this include: You have multiple applications using the same OIDC provider, and you only want to use groups that are relevant to Argo Workflows. You have many groups and exceed the 4KB cookie size limit (cookies are used to store authentication tokens). If this occurs, login will fail. sso : # Specify a list of regular expressions to filter the groups returned by the OIDC provider. 
# A logical \"OR\" is used between each regex in the list filterGroupsRegex : - \".*argo-wf.*\" - \".*argo-workflow.*\"","title":"Filtering groups"},{"location":"argo-server/","text":"Argo Server \u00b6 v2.5 and after HTTP vs HTTPS Since v3.0 the Argo Server listens for HTTPS requests, rather than HTTP. The Argo Server is a server that exposes an API and UI for workflows. You'll need to use this if you want to offload large workflows or the workflow archive . You can run this in either \"hosted\" or \"local\" mode. It replaces the Argo UI. Hosted Mode \u00b6 Use this mode if: You want a drop-in replacement for the Argo UI. If you need to prevent users from directly accessing the database. Hosted mode is provided as part of the standard manifests , specifically in argo-server-deployment.yaml . Local Mode \u00b6 Use this mode if: You want something that does not require complex set-up. You do not need to run a database. To run locally: argo server This will start a server on port 2746 which you can view . Options \u00b6 Auth Mode \u00b6 See auth . Managed Namespace \u00b6 See managed namespace . Base HREF \u00b6 If the server is running behind reverse proxy with a sub-path different from / (for example, /argo ), you can set an alternative sub-path with the --basehref flag or the BASE_HREF environment variable. You probably now should read how to set-up an ingress Transport Layer Security \u00b6 See TLS . SSO \u00b6 See SSO . See here about sharing Argo CD's Dex with Argo Workflows. Access the Argo Workflows UI \u00b6 By default, the Argo UI service is not exposed with an external IP. To access the UI, use one of the following: kubectl port-forward \u00b6 kubectl -n argo port-forward svc/argo-server 2746 :2746 Then visit: https://localhost:2746 Expose a LoadBalancer \u00b6 Update the service to be of type LoadBalancer . kubectl patch svc argo-server -n argo -p '{\"spec\": {\"type\": \"LoadBalancer\"}}' Then wait for the external IP to be made available: kubectl get svc argo-server -n argo NAME TYPE CLUSTER-IP EXTERNAL-IP PORT ( S ) AGE argo-server LoadBalancer 10 .43.43.130 172 .18.0.2 2746 :30008/TCP 18h Ingress \u00b6 You can get ingress working as follows: Add BASE_HREF as environment variable to deployment/argo-server . Do not forget to add a trailing '/' character. --- apiVersion : apps/v1 kind : Deployment metadata : name : argo-server spec : selector : matchLabels : app : argo-server template : metadata : labels : app : argo-server spec : containers : - args : - server env : - name : BASE_HREF value : /argo/ image : argoproj/argocli:latest name : argo-server ... Create a ingress, with the annotation ingress.kubernetes.io/rewrite-target: / : If TLS is enabled (default in v3.0 and after), the ingress controller must be told that the backend uses HTTPS. The method depends on the ingress controller, e.g. Traefik expects an ingress.kubernetes.io/protocol annotation, while ingress-nginx uses nginx.ingress.kubernetes.io/backend-protocol apiVersion : networking.k8s.io/v1beta1 kind : Ingress metadata : name : argo-server annotations : ingress.kubernetes.io/rewrite-target : /$2 ingress.kubernetes.io/protocol : https # Traefik nginx.ingress.kubernetes.io/backend-protocol : https # ingress-nginx spec : rules : - http : paths : - backend : serviceName : argo-server servicePort : 2746 path : /argo(/|$)(.*) Learn more Security \u00b6 Users should consider the following in their set-up of the Argo Server: API Authentication Rate Limiting \u00b6 Argo Server does not perform authentication directly. 
It delegates this to either the Kubernetes API Server (when --auth-mode=client ) and the OAuth provider (when --auth-mode=sso ). In each case, it is recommended that the delegate implements any authentication rate limiting you need. IP Address Logging \u00b6 Argo Server does not log the IP addresses of API requests. We recommend you put the Argo Server behind a load balancer, and that load balancer is configured to log the IP addresses of requests that return authentication or authorization errors. Rate Limiting \u00b6 v3.4 and after Argo Server by default rate limits to 1000 per IP per minute, you can configure it through --api-rate-limit . You can access additional information through the following headers. X-Rate-Limit-Limit - the rate limit ceiling that is applicable for the current request. X-Rate-Limit-Remaining - the number of requests left for the current rate-limit window. X-Rate-Limit-Reset - the time at which the rate limit resets, specified in UTC time. Retry-After - indicate when a client should retry requests (when the rate limit expires), in UTC time.","title":"Argo Server"},{"location":"argo-server/#argo-server","text":"v2.5 and after HTTP vs HTTPS Since v3.0 the Argo Server listens for HTTPS requests, rather than HTTP. The Argo Server is a server that exposes an API and UI for workflows. You'll need to use this if you want to offload large workflows or the workflow archive . You can run this in either \"hosted\" or \"local\" mode. It replaces the Argo UI.","title":"Argo Server"},{"location":"argo-server/#hosted-mode","text":"Use this mode if: You want a drop-in replacement for the Argo UI. If you need to prevent users from directly accessing the database. Hosted mode is provided as part of the standard manifests , specifically in argo-server-deployment.yaml .","title":"Hosted Mode"},{"location":"argo-server/#local-mode","text":"Use this mode if: You want something that does not require complex set-up. You do not need to run a database. To run locally: argo server This will start a server on port 2746 which you can view .","title":"Local Mode"},{"location":"argo-server/#options","text":"","title":"Options"},{"location":"argo-server/#auth-mode","text":"See auth .","title":"Auth Mode"},{"location":"argo-server/#managed-namespace","text":"See managed namespace .","title":"Managed Namespace"},{"location":"argo-server/#base-href","text":"If the server is running behind reverse proxy with a sub-path different from / (for example, /argo ), you can set an alternative sub-path with the --basehref flag or the BASE_HREF environment variable. You probably now should read how to set-up an ingress","title":"Base HREF"},{"location":"argo-server/#transport-layer-security","text":"See TLS .","title":"Transport Layer Security"},{"location":"argo-server/#sso","text":"See SSO . See here about sharing Argo CD's Dex with Argo Workflows.","title":"SSO"},{"location":"argo-server/#access-the-argo-workflows-ui","text":"By default, the Argo UI service is not exposed with an external IP. To access the UI, use one of the following:","title":"Access the Argo Workflows UI"},{"location":"argo-server/#kubectl-port-forward","text":"kubectl -n argo port-forward svc/argo-server 2746 :2746 Then visit: https://localhost:2746","title":"kubectl port-forward"},{"location":"argo-server/#expose-a-loadbalancer","text":"Update the service to be of type LoadBalancer . 
kubectl patch svc argo-server -n argo -p '{\"spec\": {\"type\": \"LoadBalancer\"}}' Then wait for the external IP to be made available: kubectl get svc argo-server -n argo NAME TYPE CLUSTER-IP EXTERNAL-IP PORT ( S ) AGE argo-server LoadBalancer 10 .43.43.130 172 .18.0.2 2746 :30008/TCP 18h","title":"Expose a LoadBalancer"},{"location":"argo-server/#ingress","text":"You can get ingress working as follows: Add BASE_HREF as environment variable to deployment/argo-server . Do not forget to add a trailing '/' character. --- apiVersion : apps/v1 kind : Deployment metadata : name : argo-server spec : selector : matchLabels : app : argo-server template : metadata : labels : app : argo-server spec : containers : - args : - server env : - name : BASE_HREF value : /argo/ image : argoproj/argocli:latest name : argo-server ... Create a ingress, with the annotation ingress.kubernetes.io/rewrite-target: / : If TLS is enabled (default in v3.0 and after), the ingress controller must be told that the backend uses HTTPS. The method depends on the ingress controller, e.g. Traefik expects an ingress.kubernetes.io/protocol annotation, while ingress-nginx uses nginx.ingress.kubernetes.io/backend-protocol apiVersion : networking.k8s.io/v1beta1 kind : Ingress metadata : name : argo-server annotations : ingress.kubernetes.io/rewrite-target : /$2 ingress.kubernetes.io/protocol : https # Traefik nginx.ingress.kubernetes.io/backend-protocol : https # ingress-nginx spec : rules : - http : paths : - backend : serviceName : argo-server servicePort : 2746 path : /argo(/|$)(.*) Learn more","title":"Ingress"},{"location":"argo-server/#security","text":"Users should consider the following in their set-up of the Argo Server:","title":"Security"},{"location":"argo-server/#api-authentication-rate-limiting","text":"Argo Server does not perform authentication directly. It delegates this to either the Kubernetes API Server (when --auth-mode=client ) and the OAuth provider (when --auth-mode=sso ). In each case, it is recommended that the delegate implements any authentication rate limiting you need.","title":"API Authentication Rate Limiting"},{"location":"argo-server/#ip-address-logging","text":"Argo Server does not log the IP addresses of API requests. We recommend you put the Argo Server behind a load balancer, and that load balancer is configured to log the IP addresses of requests that return authentication or authorization errors.","title":"IP Address Logging"},{"location":"argo-server/#rate-limiting","text":"v3.4 and after Argo Server by default rate limits to 1000 per IP per minute, you can configure it through --api-rate-limit . You can access additional information through the following headers. X-Rate-Limit-Limit - the rate limit ceiling that is applicable for the current request. X-Rate-Limit-Remaining - the number of requests left for the current rate-limit window. X-Rate-Limit-Reset - the time at which the rate limit resets, specified in UTC time. Retry-After - indicate when a client should retry requests (when the rate limit expires), in UTC time.","title":"Rate Limiting"},{"location":"artifact-repository-ref/","text":"Artifact Repository Ref \u00b6 v2.9 and after You can reduce duplication in your templates by configuring repositories that can be accessed by any workflow. This can also remove sensitive information from your templates. 
Create a suitable config map in either (a) your workflows namespace or (b) in the managed namespace: apiVersion : v1 kind : ConfigMap metadata : # If you want to use this config map by default, name it \"artifact-repositories\". Otherwise, you can provide a reference to a # different config map in `artifactRepositoryRef.configMap`. name : my-artifact-repository annotations : # v3.0 and after - if you want to use a specific key, put that key into this annotation. workflows.argoproj.io/default-artifact-repository : default-v1-s3-artifact-repository data : default-v1-s3-artifact-repository : | s3: bucket: my-bucket endpoint: minio:9000 insecure: true accessKeySecret: name: my-minio-cred key: accesskey secretKeySecret: name: my-minio-cred key: secretkey v2-s3-artifact-repository : | s3: ... You can override the artifact repository for a workflow as follows: spec : artifactRepositoryRef : configMap : my-artifact-repository # default is \"artifact-repositories\" key : v2-s3-artifact-repository # default can be set by the `workflows.argoproj.io/default-artifact-repository` annotation in config map. This feature gives maximum benefit when used with key-only artifacts . Reference .","title":"Artifact Repository Ref"},{"location":"artifact-repository-ref/#artifact-repository-ref","text":"v2.9 and after You can reduce duplication in your templates by configuring repositories that can be accessed by any workflow. This can also remove sensitive information from your templates. Create a suitable config map in either (a) your workflows namespace or (b) in the managed namespace: apiVersion : v1 kind : ConfigMap metadata : # If you want to use this config map by default, name it \"artifact-repositories\". Otherwise, you can provide a reference to a # different config map in `artifactRepositoryRef.configMap`. name : my-artifact-repository annotations : # v3.0 and after - if you want to use a specific key, put that key into this annotation. workflows.argoproj.io/default-artifact-repository : default-v1-s3-artifact-repository data : default-v1-s3-artifact-repository : | s3: bucket: my-bucket endpoint: minio:9000 insecure: true accessKeySecret: name: my-minio-cred key: accesskey secretKeySecret: name: my-minio-cred key: secretkey v2-s3-artifact-repository : | s3: ... You can override the artifact repository for a workflow as follows: spec : artifactRepositoryRef : configMap : my-artifact-repository # default is \"artifact-repositories\" key : v2-s3-artifact-repository # default can be set by the `workflows.argoproj.io/default-artifact-repository` annotation in config map. This feature gives maximum benefit when used with key-only artifacts . Reference .","title":"Artifact Repository Ref"},{"location":"artifact-visualization/","text":"Artifact Visualization \u00b6 since v3.4 Artifacts can be viewed in the UI. Use cases: Comparing ML pipeline runs from generated charts. Visualizing end results of ML pipeline runs. Debugging workflows where visual artifacts are the most helpful. Artifacts appear as elements in the workflow DAG that you can click on. When you click on the artifact, a panel appears. The first time this appears explanatory text is shown to help you understand if you might need to change your workflows to use this feature. Known file types such as images, text or HTML are shown in an inline-frame ( iframe ). Artifacts are sandboxed using a Content-Security-Policy that prevents JavaScript execution. JSON is shown with syntax highlighting. To start, take a look at the example . 
Artifact Types \u00b6 An artifact maybe a .tgz , file or directory. .tgz \u00b6 Viewing of .tgz is not supported in the UI. By default artifacts are compressed as a .tgz . Only artifacts that were not compressed can be viewed. To prevent compression, set archive to none to prevent compression: - name : artifact # ... archive : none : { } File \u00b6 Files maybe shown in the UI. To determine if a file can be shown, the UI checks if the artifact's file extension is supported. The extension is found in the artifact's key. To view a file, add the extension to the key: - name : single-file s3 : key : visualization.png Directory \u00b6 Directories are shown in the UI. The UI considers any key with a trailing-slash to be a directory. To view a directory, add a trailing-slash: - name : reports s3 : key : reports/ If the directory contains index.html , then that will be shown, otherwise a directory listing is displayed. \u26a0\ufe0f HTML files may contain CSS and images served from the same origin. Scripts are not allowed. Nothing may be remotely loaded. Security \u00b6 Content Security Policy \u00b6 We assume that artifacts are not trusted, so by default, artifacts are served with a Content-Security-Policy that disables JavaScript and remote files. This is similar to what happens when you include third-party scripts, such as analytic tracking, in your website. However, those tracking codes are normally served from a different domain to your main website. Artifacts are served from the same origin, so normal browser controls are not secure enough. Sub-Path Access \u00b6 Previously, users could access the artifacts of any workflows they could access. To allow HTML files to link to other files within their tree, you can now access any sub-paths of the artifact's key. Example: The artifact produces a folder in an S3 bucket named my-bucket , with a key report/ . You can also access anything matching report/* .","title":"Artifact Visualization"},{"location":"artifact-visualization/#artifact-visualization","text":"since v3.4 Artifacts can be viewed in the UI. Use cases: Comparing ML pipeline runs from generated charts. Visualizing end results of ML pipeline runs. Debugging workflows where visual artifacts are the most helpful. Artifacts appear as elements in the workflow DAG that you can click on. When you click on the artifact, a panel appears. The first time this appears explanatory text is shown to help you understand if you might need to change your workflows to use this feature. Known file types such as images, text or HTML are shown in an inline-frame ( iframe ). Artifacts are sandboxed using a Content-Security-Policy that prevents JavaScript execution. JSON is shown with syntax highlighting. To start, take a look at the example .","title":"Artifact Visualization"},{"location":"artifact-visualization/#artifact-types","text":"An artifact maybe a .tgz , file or directory.","title":"Artifact Types"},{"location":"artifact-visualization/#tgz","text":"Viewing of .tgz is not supported in the UI. By default artifacts are compressed as a .tgz . Only artifacts that were not compressed can be viewed. To prevent compression, set archive to none to prevent compression: - name : artifact # ... archive : none : { }","title":".tgz"},{"location":"artifact-visualization/#file","text":"Files maybe shown in the UI. To determine if a file can be shown, the UI checks if the artifact's file extension is supported. The extension is found in the artifact's key. 
To view a file, add the extension to the key: - name : single-file s3 : key : visualization.png","title":"File"},{"location":"artifact-visualization/#directory","text":"Directories are shown in the UI. The UI considers any key with a trailing-slash to be a directory. To view a directory, add a trailing-slash: - name : reports s3 : key : reports/ If the directory contains index.html , then that will be shown, otherwise a directory listing is displayed. \u26a0\ufe0f HTML files may contain CSS and images served from the same origin. Scripts are not allowed. Nothing may be remotely loaded.","title":"Directory"},{"location":"artifact-visualization/#security","text":"","title":"Security"},{"location":"artifact-visualization/#content-security-policy","text":"We assume that artifacts are not trusted, so by default, artifacts are served with a Content-Security-Policy that disables JavaScript and remote files. This is similar to what happens when you include third-party scripts, such as analytic tracking, in your website. However, those tracking codes are normally served from a different domain to your main website. Artifacts are served from the same origin, so normal browser controls are not secure enough.","title":"Content Security Policy"},{"location":"artifact-visualization/#sub-path-access","text":"Previously, users could access the artifacts of any workflows they could access. To allow HTML files to link to other files within their tree, you can now access any sub-paths of the artifact's key. Example: The artifact produces a folder in an S3 bucket named my-bucket , with a key report/ . You can also access anything matching report/* .","title":"Sub-Path Access"},{"location":"async-pattern/","text":"Asynchronous Job Pattern \u00b6 Introduction \u00b6 If triggering an external job (e.g. an Amazon EMR job) from Argo that does not run to completion in a container, there are two options: create a container that polls the external job completion status combine a trigger step that starts the job with a suspend step that is resumed by an API call to Argo when the external job is complete. This document describes the second option in more detail. The pattern \u00b6 The pattern involves two steps - the first step is a short-running step that triggers a long-running job outside Argo (e.g. an HTTP submission), and the second step is a suspend step that suspends workflow execution and is ultimately either resumed or stopped (i.e. failed) via a call to the Argo API when the job outside Argo succeeds or fails. 
When implemented as a WorkflowTemplate it can look something like this: apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : external-job-template spec : entrypoint : run-external-job arguments : parameters : - name : \"job-cmd\" templates : - name : run-external-job inputs : parameters : - name : \"job-cmd\" value : \"{{workflow.parameters.job-cmd}}\" steps : - - name : trigger-job template : trigger-job arguments : parameters : - name : \"job-cmd\" value : \"{{inputs.parameters.job-cmd}}\" - - name : wait-completion template : wait-completion arguments : parameters : - name : uuid value : \"{{steps.trigger-job.outputs.result}}\" - name : trigger-job inputs : parameters : - name : \"job-cmd\" container : image : appropriate/curl:latest command : [ \"/bin/sh\" , \"-c\" ] args : [ \"{{inputs.parameters.job-cmd}}\" ] - name : wait-completion inputs : parameters : - name : uuid suspend : { } In this case the job-cmd parameter can be a command that makes an HTTP call via curl to an endpoint that returns a job UUID. More sophisticated submission and parsing of submission output could be done with something like a Python script step. On job completion the external job would need to call either resume if successful: You may need an access token . curl --request PUT \\ --url https://localhost:2746/api/v1/workflows///resume --header 'content-type: application/json' \\ --header \"Authorization: $ARGO_TOKEN \" \\ --data '{ \"namespace\": \"\", \"name\": \"\", \"nodeFieldSelector\": \"inputs.parameters.uuid.value=\" }' or stop if unsuccessful: curl --request PUT \\ --url https://localhost:2746/api/v1/workflows///stop --header 'content-type: application/json' \\ --header \"Authorization: $ARGO_TOKEN \" \\ --data '{ \"namespace\": \"\", \"name\": \"\", \"nodeFieldSelector\": \"inputs.parameters.uuid.value=\", \"message\": \"\" }' Retrying failed jobs \u00b6 Using argo retry on failed jobs that follow this pattern will cause Argo to re-attempt the suspend step without re-triggering the job. Instead you need to use the --restart-successful option, e.g. if using the template from above: argo retry --restart-successful --node-field-selector templateRef.template = run-external-job,phase = Failed","title":"Asynchronous Job Pattern"},{"location":"async-pattern/#asynchronous-job-pattern","text":"","title":"Asynchronous Job Pattern"},{"location":"async-pattern/#introduction","text":"If triggering an external job (e.g. an Amazon EMR job) from Argo that does not run to completion in a container, there are two options: create a container that polls the external job completion status combine a trigger step that starts the job with a suspend step that is resumed by an API call to Argo when the external job is complete. This document describes the second option in more detail.","title":"Introduction"},{"location":"async-pattern/#the-pattern","text":"The pattern involves two steps - the first step is a short-running step that triggers a long-running job outside Argo (e.g. an HTTP submission), and the second step is a suspend step that suspends workflow execution and is ultimately either resumed or stopped (i.e. failed) via a call to the Argo API when the job outside Argo succeeds or fails. 
When implemented as a WorkflowTemplate it can look something like this: apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : external-job-template spec : entrypoint : run-external-job arguments : parameters : - name : \"job-cmd\" templates : - name : run-external-job inputs : parameters : - name : \"job-cmd\" value : \"{{workflow.parameters.job-cmd}}\" steps : - - name : trigger-job template : trigger-job arguments : parameters : - name : \"job-cmd\" value : \"{{inputs.parameters.job-cmd}}\" - - name : wait-completion template : wait-completion arguments : parameters : - name : uuid value : \"{{steps.trigger-job.outputs.result}}\" - name : trigger-job inputs : parameters : - name : \"job-cmd\" container : image : appropriate/curl:latest command : [ \"/bin/sh\" , \"-c\" ] args : [ \"{{inputs.parameters.job-cmd}}\" ] - name : wait-completion inputs : parameters : - name : uuid suspend : { } In this case the job-cmd parameter can be a command that makes an HTTP call via curl to an endpoint that returns a job UUID. More sophisticated submission and parsing of submission output could be done with something like a Python script step. On job completion the external job would need to call either resume if successful: You may need an access token . curl --request PUT \\ --url https://localhost:2746/api/v1/workflows///resume --header 'content-type: application/json' \\ --header \"Authorization: $ARGO_TOKEN \" \\ --data '{ \"namespace\": \"\", \"name\": \"\", \"nodeFieldSelector\": \"inputs.parameters.uuid.value=\" }' or stop if unsuccessful: curl --request PUT \\ --url https://localhost:2746/api/v1/workflows///stop --header 'content-type: application/json' \\ --header \"Authorization: $ARGO_TOKEN \" \\ --data '{ \"namespace\": \"\", \"name\": \"\", \"nodeFieldSelector\": \"inputs.parameters.uuid.value=\", \"message\": \"\" }'","title":"The pattern"},{"location":"async-pattern/#retrying-failed-jobs","text":"Using argo retry on failed jobs that follow this pattern will cause Argo to re-attempt the suspend step without re-triggering the job. Instead you need to use the --restart-successful option, e.g. if using the template from above: argo retry --restart-successful --node-field-selector templateRef.template = run-external-job,phase = Failed","title":"Retrying failed jobs"},{"location":"client-libraries/","text":"Client Libraries \u00b6 This page contains an overview of the client libraries for using the Argo API from various programming languages. To write applications using the REST API, you do not need to implement the API calls and request/response types yourself. You can use a client library for the programming language you are using. Client libraries often handle common tasks such as authentication for you. Auto-generated client libraries \u00b6 The following client libraries are auto-generated using OpenAPI Generator . Please expect very minimal support from the Argo team. Language Client Library Examples/Docs Golang apiclient.go Example Java Java Python Python Community-maintained client libraries \u00b6 The following client libraries are provided and maintained by their authors, not the Argo team. Language Client Library Examples/Docs Python Couler Multi-workflow engine support Python SDK Python Hera Easy and accessible Argo workflows construction and submission in Python","title":"Client Libraries"},{"location":"client-libraries/#client-libraries","text":"This page contains an overview of the client libraries for using the Argo API from various programming languages. 
To write applications using the REST API, you do not need to implement the API calls and request/response types yourself. You can use a client library for the programming language you are using. Client libraries often handle common tasks such as authentication for you.","title":"Client Libraries"},{"location":"client-libraries/#auto-generated-client-libraries","text":"The following client libraries are auto-generated using OpenAPI Generator . Please expect very minimal support from the Argo team. Language Client Library Examples/Docs Golang apiclient.go Example Java Java Python Python","title":"Auto-generated client libraries"},{"location":"client-libraries/#community-maintained-client-libraries","text":"The following client libraries are provided and maintained by their authors, not the Argo team. Language Client Library Examples/Docs Python Couler Multi-workflow engine support Python SDK Python Hera Easy and accessible Argo workflows construction and submission in Python","title":"Community-maintained client libraries"},{"location":"cluster-workflow-templates/","text":"Cluster Workflow Templates \u00b6 v2.8 and after Introduction \u00b6 ClusterWorkflowTemplates are cluster scoped WorkflowTemplates . ClusterWorkflowTemplate can be created cluster scoped like ClusterRole and can be accessed across all namespaces in the cluster. WorkflowTemplates documentation link Defining ClusterWorkflowTemplate \u00b6 apiVersion : argoproj.io/v1alpha1 kind : ClusterWorkflowTemplate metadata : name : cluster-workflow-template-whalesay-template spec : templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] Referencing other ClusterWorkflowTemplates \u00b6 You can reference templates from other ClusterWorkflowTemplates using a templateRef field with clusterScope: true . Just as how you reference other templates within the same Workflow , you should do so from a steps or dag template. Here is an example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay steps : # You should only reference external \"templates\" in a \"steps\" or \"dag\" \"template\". - - name : call-whalesay-template templateRef : # You can reference a \"template\" from another \"WorkflowTemplate or ClusterWorkflowTemplate\" using this field name : cluster-workflow-template-whalesay-template # This is the name of the \"WorkflowTemplate or ClusterWorkflowTemplate\" CRD that contains the \"template\" you want template : whalesay-template # This is the name of the \"template\" you want to reference clusterScope : true # This field indicates this templateRef is pointing ClusterWorkflowTemplate arguments : # You can pass in arguments as normal parameters : - name : message value : \"hello world\" 2.9 and after Create Workflow from ClusterWorkflowTemplate Spec \u00b6 You can create Workflow from ClusterWorkflowTemplate spec using workflowTemplateRef with clusterScope: true . 
If you pass the arguments to created Workflow , it will be merged with cluster workflow template arguments Here is an example for ClusterWorkflowTemplate with entrypoint and arguments apiVersion : argoproj.io/v1alpha1 kind : ClusterWorkflowTemplate metadata : name : cluster-workflow-template-submittable spec : entrypoint : whalesay-template arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] Here is an example for creating ClusterWorkflowTemplate as Workflow with passing entrypoint and arguments to ClusterWorkflowTemplate apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : cluster-workflow-template-hello-world- spec : entrypoint : whalesay-template arguments : parameters : - name : message value : \"from workflow\" workflowTemplateRef : name : cluster-workflow-template-submittable clusterScope : true Here is an example of a creating WorkflowTemplate as Workflow and using WorkflowTemplates 's entrypoint and Workflow Arguments apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : cluster-workflow-template-hello-world- spec : workflowTemplateRef : name : cluster-workflow-template-submittable clusterScope : true Managing ClusterWorkflowTemplates \u00b6 CLI \u00b6 You can create some example templates as follows: argo cluster-template create https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/cluster-workflow-template/clustertemplates.yaml The submit a workflow using one of those templates: argo submit https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml 2.7 and after The submit a ClusterWorkflowTemplate as a Workflow : argo submit --from clusterworkflowtemplate/cluster-workflow-template-submittable kubectl \u00b6 Using kubectl apply -f and kubectl get cwft UI \u00b6 ClusterWorkflowTemplate resources can also be managed by the UI","title":"Cluster Workflow Templates"},{"location":"cluster-workflow-templates/#cluster-workflow-templates","text":"v2.8 and after","title":"Cluster Workflow Templates"},{"location":"cluster-workflow-templates/#introduction","text":"ClusterWorkflowTemplates are cluster scoped WorkflowTemplates . ClusterWorkflowTemplate can be created cluster scoped like ClusterRole and can be accessed across all namespaces in the cluster. WorkflowTemplates documentation link","title":"Introduction"},{"location":"cluster-workflow-templates/#defining-clusterworkflowtemplate","text":"apiVersion : argoproj.io/v1alpha1 kind : ClusterWorkflowTemplate metadata : name : cluster-workflow-template-whalesay-template spec : templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ]","title":"Defining ClusterWorkflowTemplate"},{"location":"cluster-workflow-templates/#referencing-other-clusterworkflowtemplates","text":"You can reference templates from other ClusterWorkflowTemplates using a templateRef field with clusterScope: true . Just as how you reference other templates within the same Workflow , you should do so from a steps or dag template. 
Here is an example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay steps : # You should only reference external \"templates\" in a \"steps\" or \"dag\" \"template\". - - name : call-whalesay-template templateRef : # You can reference a \"template\" from another \"WorkflowTemplate or ClusterWorkflowTemplate\" using this field name : cluster-workflow-template-whalesay-template # This is the name of the \"WorkflowTemplate or ClusterWorkflowTemplate\" CRD that contains the \"template\" you want template : whalesay-template # This is the name of the \"template\" you want to reference clusterScope : true # This field indicates this templateRef is pointing ClusterWorkflowTemplate arguments : # You can pass in arguments as normal parameters : - name : message value : \"hello world\" 2.9 and after","title":"Referencing other ClusterWorkflowTemplates"},{"location":"cluster-workflow-templates/#create-workflow-from-clusterworkflowtemplate-spec","text":"You can create Workflow from ClusterWorkflowTemplate spec using workflowTemplateRef with clusterScope: true . If you pass the arguments to created Workflow , it will be merged with cluster workflow template arguments Here is an example for ClusterWorkflowTemplate with entrypoint and arguments apiVersion : argoproj.io/v1alpha1 kind : ClusterWorkflowTemplate metadata : name : cluster-workflow-template-submittable spec : entrypoint : whalesay-template arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] Here is an example for creating ClusterWorkflowTemplate as Workflow with passing entrypoint and arguments to ClusterWorkflowTemplate apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : cluster-workflow-template-hello-world- spec : entrypoint : whalesay-template arguments : parameters : - name : message value : \"from workflow\" workflowTemplateRef : name : cluster-workflow-template-submittable clusterScope : true Here is an example of a creating WorkflowTemplate as Workflow and using WorkflowTemplates 's entrypoint and Workflow Arguments apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : cluster-workflow-template-hello-world- spec : workflowTemplateRef : name : cluster-workflow-template-submittable clusterScope : true","title":"Create Workflow from ClusterWorkflowTemplate Spec"},{"location":"cluster-workflow-templates/#managing-clusterworkflowtemplates","text":"","title":"Managing ClusterWorkflowTemplates"},{"location":"cluster-workflow-templates/#cli","text":"You can create some example templates as follows: argo cluster-template create https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/cluster-workflow-template/clustertemplates.yaml The submit a workflow using one of those templates: argo submit https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml 2.7 and after The submit a ClusterWorkflowTemplate as a Workflow : argo submit --from clusterworkflowtemplate/cluster-workflow-template-submittable","title":"CLI"},{"location":"cluster-workflow-templates/#kubectl","text":"Using kubectl apply -f and kubectl get cwft","title":"kubectl"},{"location":"cluster-workflow-templates/#ui","text":"ClusterWorkflowTemplate resources can 
also be managed by the UI","title":"UI"},{"location":"conditional-artifacts-parameters/","text":"Conditional Artifacts and Parameters \u00b6 v3.1 and after You can set Step/DAG level artifacts or parameters based on an expression . Use fromExpression under a Step/DAG level output artifact and expression under a Step/DAG level output parameter. Conditional Artifacts \u00b6 - name : coinflip steps : - - name : flip-coin template : flip-coin - - name : heads template : heads when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails template : tails when : \"{{steps.flip-coin.outputs.result}} == tails\" outputs : artifacts : - name : result fromExpression : \"steps['flip-coin'].outputs.result == 'heads' ? steps.heads.outputs.artifacts.headsresult : steps.tails.outputs.artifacts.tailsresult\" Steps artifacts example DAG artifacts example Conditional Parameters \u00b6 - name : coinflip steps : - - name : flip-coin template : flip-coin - - name : heads template : heads when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails template : tails when : \"{{steps.flip-coin.outputs.result}} == tails\" outputs : parameters : - name : stepresult valueFrom : expression : \"steps['flip-coin'].outputs.result == 'heads' ? steps.heads.outputs.result : steps.tails.outputs.result\" Steps parameter example DAG parameter example Advanced example: fibonacci Sequence","title":"Conditional Artifacts and Parameters"},{"location":"conditional-artifacts-parameters/#conditional-artifacts-and-parameters","text":"v3.1 and after You can set Step/DAG level artifacts or parameters based on an expression . Use fromExpression under a Step/DAG level output artifact and expression under a Step/DAG level output parameter.","title":"Conditional Artifacts and Parameters"},{"location":"conditional-artifacts-parameters/#conditional-artifacts","text":"- name : coinflip steps : - - name : flip-coin template : flip-coin - - name : heads template : heads when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails template : tails when : \"{{steps.flip-coin.outputs.result}} == tails\" outputs : artifacts : - name : result fromExpression : \"steps['flip-coin'].outputs.result == 'heads' ? steps.heads.outputs.artifacts.headsresult : steps.tails.outputs.artifacts.tailsresult\" Steps artifacts example DAG artifacts example","title":"Conditional Artifacts"},{"location":"conditional-artifacts-parameters/#conditional-parameters","text":"- name : coinflip steps : - - name : flip-coin template : flip-coin - - name : heads template : heads when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails template : tails when : \"{{steps.flip-coin.outputs.result}} == tails\" outputs : parameters : - name : stepresult valueFrom : expression : \"steps['flip-coin'].outputs.result == 'heads' ? steps.heads.outputs.result : steps.tails.outputs.result\" Steps parameter example DAG parameter example Advanced example: fibonacci Sequence","title":"Conditional Parameters"},{"location":"configure-archive-logs/","text":"Configuring Archive Logs \u00b6 \u26a0\ufe0f We do not recommend you rely on Argo Workflows to archive logs. Instead, use a conventional Kubernetes logging facility. To enable automatic pipeline logging, you need to configure archiveLogs at workflow-controller config-map, workflow spec, or template level. You also need to configure Artifact Repository to define where this logging artifact is stored. 
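For reference, a minimal sketch of enabling archiving for every workflow at the controller level; it assumes an S3/MinIO repository like the examples elsewhere in these docs, and the bucket and secret names are placeholders:
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  artifactRepository: |
    archiveLogs: true        # archive main-container logs for every step (assumed placement under artifactRepository)
    s3:
      bucket: my-bucket
      endpoint: minio:9000
      insecure: true
      accessKeySecret:
        name: my-minio-cred
        key: accesskey
      secretKeySecret:
        name: my-minio-cred
        key: secretkey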
Archive logs follows priorities: workflow-controller config (on) > workflow spec (on/off) > template (on/off) Controller Config Map Workflow Spec Template are we archiving logs? true true true true true true false true true false true true true false false true false true true true false true false false false false true true false false false false Configuring Workflow Controller Config Map \u00b6 See Workflow Controller Config Map Configuring Workflow Spec \u00b6 apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : archive-location- spec : archiveLogs : true entrypoint : whalesay templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ] Configuring Workflow Template \u00b6 apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : archive-location- spec : entrypoint : whalesay templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ] archiveLocation : archiveLogs : true","title":"Configuring Archive Logs"},{"location":"configure-archive-logs/#configuring-archive-logs","text":"\u26a0\ufe0f We do not recommend you rely on Argo Workflows to archive logs. Instead, use a conventional Kubernetes logging facility. To enable automatic pipeline logging, you need to configure archiveLogs at workflow-controller config-map, workflow spec, or template level. You also need to configure Artifact Repository to define where this logging artifact is stored. Archive logs follows priorities: workflow-controller config (on) > workflow spec (on/off) > template (on/off) Controller Config Map Workflow Spec Template are we archiving logs? true true true true true true false true true false true true true false false true false true true true false true false false false false true true false false false false","title":"Configuring Archive Logs"},{"location":"configure-archive-logs/#configuring-workflow-controller-config-map","text":"See Workflow Controller Config Map","title":"Configuring Workflow Controller Config Map"},{"location":"configure-archive-logs/#configuring-workflow-spec","text":"apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : archive-location- spec : archiveLogs : true entrypoint : whalesay templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ]","title":"Configuring Workflow Spec"},{"location":"configure-archive-logs/#configuring-workflow-template","text":"apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : archive-location- spec : entrypoint : whalesay templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ] archiveLocation : archiveLogs : true","title":"Configuring Workflow Template"},{"location":"configure-artifact-repository/","text":"Configuring Your Artifact Repository \u00b6 To run Argo workflows that use artifacts, you must configure and use an artifact repository. Argo supports any S3 compatible artifact repository such as AWS, GCS and MinIO. This section shows how to configure the artifact repository. Subsequent sections will show how to use it. 
Name Inputs Outputs Garbage Collection Usage (Feb 2020) Artifactory Yes Yes No 11% Azure Blob Yes Yes Yes - GCS Yes Yes Yes - Git Yes No No - HDFS Yes Yes No 3% HTTP Yes Yes No 2% OSS Yes Yes No - Raw Yes No No 5% S3 Yes Yes Yes 86% The actual repository used by a workflow is chosen by the following rules: Anything explicitly configured using Artifact Repository Ref . This is the most flexible, safe, and secure option. From a config map named artifact-repositories if it has the workflows.argoproj.io/default-artifact-repository annotation in the workflow's namespace. From a workflow controller config-map. Configuring MinIO \u00b6 You can install MinIO into your cluster via Helm. First, install helm . Then, install MinIO with the below commands: helm repo add minio https://helm.min.io/ # official minio Helm charts helm repo update helm install argo-artifacts minio/minio --set service.type = LoadBalancer --set fullnameOverride = argo-artifacts Login to the MinIO UI using a web browser (port 9000) after obtaining the external IP using kubectl . kubectl get service argo-artifacts On Minikube: minikube service --url argo-artifacts NOTE: When MinIO is installed via Helm, it generates credentials, which you will use to login to the UI: Use the commands shown below to see the credentials AccessKey : kubectl get secret argo-artifacts -o jsonpath='{.data.accesskey}' | base64 --decode SecretKey : kubectl get secret argo-artifacts -o jsonpath='{.data.secretkey}' | base64 --decode Create a bucket named my-bucket from the MinIO UI. If MinIO is configured to use TLS you need to set the parameter insecure to false . Additionally, if MinIO is protected by certificates generated by a custom CA, you first need to save the CA certificate in a Kubernetes secret, then set the caSecret parameter accordingly. This will allow Argo to correctly verify the server certificate presented by MinIO. For example: kubectl create secret generic my-root-ca --from-file = my-ca.pem artifacts : - s3 : insecure : false caSecret : name : my-root-ca key : my-ca.pem ... Configuring AWS S3 \u00b6 Create your bucket and access keys for the bucket. AWS access keys have the same permissions as the user they are associated with. In particular, you cannot create access keys with reduced scope. If you want to limit the permissions for an access key, you will need to create a user with just the permissions you want to associate with the access key. Otherwise, you can just create an access key using your existing user account. $ export mybucket = bucket249 $ cat > policy.json < access-key.json If you do not have Artifact Garbage Collection configured, you should remove s3:DeleteObject from the list of Actions above. NOTE: if you want argo to figure out which region your buckets belong in, you must additionally set the following statement policy. Otherwise, you must specify a bucket region in your workflow configuration. { \"Effect\" : \"Allow\" , \"Action\" :[ \"s3:GetBucketLocation\" ], \"Resource\" : \"arn:aws:s3:::*\" } ... AWS S3 IRSA \u00b6 If you wish to use S3 IRSA instead of passing in an accessKey and secretKey , you need to annotate the service account of both the running workflow (in order to save logs/artifacts) and the argo-server pod (in order to retrieve the logs/artifacts). 
apiVersion : v1 kind : ServiceAccount metadata : annotations : eks.amazonaws.com/role-arn : arn:aws:iam::012345678901:role/mybucket name : myserviceaccount namespace : mynamespace Configuring GCS (Google Cloud Storage) \u00b6 Create a bucket from the GCP Console ( https://console.cloud.google.com/storage/browser ). There are 2 ways to configure a Google Cloud Storage. Through Native GCS APIs \u00b6 Create and download a Google Cloud service account key. Create a kubernetes secret to store the key. Configure gcs artifact as following in the yaml. artifacts : - name : message path : /tmp/message gcs : bucket : my-bucket-name key : path/in/bucket # serviceAccountKeySecret is a secret selector. # It references the k8s secret named 'my-gcs-credentials'. # This secret is expected to have have the key 'serviceAccountKey', # containing the base64 encoded credentials # to the bucket. # # If it's running on GKE and Workload Identity is used, # serviceAccountKeySecret is not needed. serviceAccountKeySecret : name : my-gcs-credentials key : serviceAccountKey If it's a GKE cluster, and Workload Identity is configured, there's no need to create the service account key and store it as a Kubernetes secret, serviceAccountKeySecret is also not needed in this case. Please follow the link to configure Workload Identity ( https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity ). Use S3 APIs \u00b6 Enable S3 compatible access and create an access key. Note that S3 compatible access is on a per project rather than per bucket basis. Navigate to Storage > Settings ( https://console.cloud.google.com/storage/settings ). Enable interoperability access if needed. Create a new key if needed. Configure s3 artifact as following example. artifacts : - name : my-output-artifact path : /my-output-artifact s3 : endpoint : storage.googleapis.com bucket : my-gcs-bucket-name # NOTE that, by default, all output artifacts are automatically tarred and # gzipped before saving. So as a best practice, .tgz or .tar.gz # should be incorporated into the key name so the resulting file # has an accurate file extension. key : path/in/bucket/my-output-artifact.tgz accessKeySecret : name : my-gcs-s3-credentials key : accessKey secretKeySecret : name : my-gcs-s3-credentials key : secretKey Configuring Alibaba Cloud OSS (Object Storage Service) \u00b6 Create your bucket and access key for the bucket. Suggest to limit the permission for the access key, you will need to create a user with just the permissions you want to associate with the access key. Otherwise, you can just create an access key using your existing user account. Setup Alibaba Cloud CLI and follow the steps to configure the artifact storage for your workflow: $ export mybucket = bucket-workflow-artifect $ export myregion = cn-zhangjiakou $ # limit permission to read/write the bucket. $ cat > policy.json < access-key.json $ # create secret in demo namespace, replace demo with your namespace. $ kubectl create secret generic $mybucket -credentials -n demo \\ --from-literal \"accessKey= $( cat access-key.json | jq -r .AccessKey.AccessKeyId ) \" \\ --from-literal \"secretKey= $( cat access-key.json | jq -r .AccessKey.AccessKeySecret ) \" $ # create configmap to config default artifact for a namespace. $ cat > default-artifact-repository.yaml << EOF apiVersion: v1 kind: ConfigMap metadata: # If you want to use this config map by default, name it \"artifact-repositories\". 
Otherwise, you can provide a reference to a # different config map in `artifactRepositoryRef.configMap`. name: artifact-repositories annotations: # v3.0 and after - if you want to use a specific key, put that key into this annotation. workflows.argoproj.io/default-artifact-repository: default-oss-artifact-repository data: default-oss-artifact-repository: | oss: endpoint: http://oss-cn-zhangjiakou-internal.aliyuncs.com bucket: $mybucket # accessKeySecret and secretKeySecret are secret selectors. # It references the k8s secret named 'bucket-workflow-artifect-credentials'. # This secret is expected to have have the keys 'accessKey' # and 'secretKey', containing the base64 encoded credentials # to the bucket. accessKeySecret: name: $mybucket-credentials key: accessKey secretKeySecret: name: $mybucket-credentials key: secretKey EOF # create cm in demo namespace, replace demo with your namespace. $ k apply -f default-artifact-repository.yaml -n demo You can also set createBucketIfNotPresent to true to tell the artifact driver to automatically create the OSS bucket if it doesn't exist yet when saving artifacts. Note that you'll need to set additional permission for your OSS account to create new buckets. Alibaba Cloud OSS RRSA \u00b6 If you wish to use OSS RRSA instead of passing in an accessKey and secretKey , you need to perform the following actions: Install pod-identity-webhook in your cluster to automatically inject the OIDC tokens and environment variables. Add the label pod-identity.alibabacloud.com/injection: 'on' to the target workflow namespace. Add the annotation pod-identity.alibabacloud.com/role-name: $your_ram_role_name to the service account of running workflow. Set useSDKCreds: true in your target artifact repository cm and remove the secret references to AK/SK. apiVersion : v1 kind : Namespace metadata : name : my-ns labels : pod-identity.alibabacloud.com/injection : 'on' --- apiVersion : v1 kind : ServiceAccount metadata : name : my-sa namespace : rrsa-demo annotations : pod-identity.alibabacloud.com/role-name : $your_ram_role_name --- apiVersion : v1 kind : ConfigMap metadata : # If you want to use this config map by default, name it \"artifact-repositories\". Otherwise, you can provide a reference to a # different config map in `artifactRepositoryRef.configMap`. name : artifact-repositories annotations : # v3.0 and after - if you want to use a specific key, put that key into this annotation. workflows.argoproj.io/default-artifact-repository : default-oss-artifact-repository data : default-oss-artifact-repository : | oss: endpoint: http://oss-cn-zhangjiakou-internal.aliyuncs.com bucket: $mybucket useSDKCreds: true Configuring Azure Blob Storage \u00b6 Create an Azure Storage account and a container within that account. There are a number of ways to accomplish this, including the Azure Portal or the CLI . Retrieve the blob service endpoint for the storage account. For example: az storage account show -n mystorageaccountname --query 'primaryEndpoints.blob' -otsv Retrieve the access key for the storage account. For example: az storage account keys list -n mystorageaccountname --query '[0].value' -otsv Create a kubernetes secret to hold the storage account key. For example: kubectl create secret generic my-azure-storage-credentials \\ --from-literal \"account-access-key= $( az storage account keys list -n mystorageaccountname --query '[0].value' -otsv ) \" Configure azure artifact as following in the yaml. 
artifacts : - name : message path : /tmp/message azure : endpoint : https://mystorageaccountname.blob.core.windows.net container : my-container-name blob : path/in/container # accountKeySecret is a secret selector. # It references the k8s secret named 'my-azure-storage-credentials'. # This secret is expected to have have the key 'account-access-key', # containing the base64 encoded credentials to the storage account. # # If a managed identity has been assigned to the machines running the # workflow (e.g., https://docs.microsoft.com/en-us/azure/aks/use-managed-identity) # then accountKeySecret is not needed, and useSDKCreds should be # set to true instead: # useSDKCreds: true accountKeySecret : name : my-azure-storage-credentials key : account-access-key If useSDKCreds is set to true , then the accountKeySecret value is not used and authentication with Azure will be attempted using a DefaultAzureCredential instead. Configure the Default Artifact Repository \u00b6 In order for Argo to use your artifact repository, you can configure it as the default repository. Edit the workflow-controller config map with the correct endpoint and access/secret keys for your repository. S3 compatible artifact repository bucket (such as AWS, GCS, MinIO, and Alibaba Cloud OSS) \u00b6 Use the endpoint corresponding to your provider: AWS: s3.amazonaws.com GCS: storage.googleapis.com MinIO: my-minio-endpoint.default:9000 Alibaba Cloud OSS: oss-cn-hangzhou-zmf.aliyuncs.com The key is name of the object in the bucket The accessKeySecret and secretKeySecret are secret selectors that reference the specified kubernetes secret. The secret is expected to have the keys accessKey and secretKey , containing the base64 encoded credentials to the bucket. For AWS, the accessKeySecret and secretKeySecret correspond to AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY respectively. EC2 provides a meta-data API via which applications using the AWS SDK may assume IAM roles associated with the instance. If you are running argo on EC2 and the instance role allows access to your S3 bucket, you can configure the workflow step pods to assume the role. To do so, simply omit the accessKeySecret and secretKeySecret fields. For GCS, the accessKeySecret and secretKeySecret for S3 compatible access can be obtained from the GCP Console. Note that S3 compatible access is on a per project rather than per bucket basis. Navigate to Storage > Settings ( https://console.cloud.google.com/storage/settings ). Enable interoperability access if needed. Create a new key if needed. For MinIO, the accessKeySecret and secretKeySecret naturally correspond the AccessKey and SecretKey . For Alibaba Cloud OSS, the accessKeySecret and secretKeySecret corresponds to accessKeyID and accessKeySecret respectively. Example: $ kubectl edit configmap workflow-controller-configmap -n argo # assumes argo was installed in the argo namespace ... data: artifactRepository: | s3: bucket: my-bucket keyFormat: prefix/in/bucket #optional endpoint: my-minio-endpoint.default:9000 #AWS => s3.amazonaws.com; GCS => storage.googleapis.com insecure: true #omit for S3/GCS. Needed when minio runs without TLS accessKeySecret: #omit if accessing via AWS IAM name: my-minio-cred key: accessKey secretKeySecret: #omit if accessing via AWS IAM name: my-minio-cred key: secretKey useSDKCreds: true #tells argo to use AWS SDK's default provider chain, enable for things like IRSA support The secrets are retrieved from the namespace you use to run your workflows. Note that you can specify a keyFormat . 
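For example, a keyFormat can reference workflow variables so artifacts are grouped per workflow and pod; this is a sketch reusing the placeholder bucket and credentials from the example above:
data:
  artifactRepository: |
    s3:
      bucket: my-bucket
      endpoint: my-minio-endpoint.default:9000
      keyFormat: "artifacts/{{workflow.name}}/{{pod.name}}"   # one prefix per workflow, one object per pod
      accessKeySecret:
        name: my-minio-cred
        key: accessKey
      secretKeySecret:
        name: my-minio-cred
        key: secretKey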
Google Cloud Storage (GCS) \u00b6 Argo also can use native GCS APIs to access a Google Cloud Storage bucket. serviceAccountKeySecret references to a Kubernetes secret which stores a Google Cloud service account key to access the bucket. Example: $ kubectl edit configmap workflow-controller-configmap -n argo # assumes argo was installed in the argo namespace ... data: artifactRepository: | gcs: bucket: my-bucket keyFormat: prefix/in/bucket/ {{ workflow.name }} / {{ pod.name }} #it should reference workflow variables, such as \"{{workflow.name}}/{{pod.name}}\" serviceAccountKeySecret: name: my-gcs-credentials key: serviceAccountKey Azure Blob Storage \u00b6 Argo can use native Azure APIs to access a Azure Blob Storage container. accountKeySecret references to a Kubernetes secret which stores an Azure Blob Storage account shared key to access the container. Example: $ kubectl edit configmap workflow-controller-configmap -n argo # assumes argo was installed in the argo namespace ... data: artifactRepository: | azure: container: my-container blobNameFormat: prefix/in/container #optional, it could reference workflow variables, such as \"{{workflow.name}}/{{pod.name}}\" accountKeySecret: name: my-azure-storage-credentials key: account-access-key Accessing Non-Default Artifact Repositories \u00b6 This section shows how to access artifacts from non-default artifact repositories. The endpoint , accessKeySecret and secretKeySecret are the same as for configuring the default artifact repository described previously. templates : - name : artifact-example inputs : artifacts : - name : my-input-artifact path : /my-input-artifact s3 : endpoint : s3.amazonaws.com bucket : my-aws-bucket-name key : path/in/bucket/my-input-artifact.tgz accessKeySecret : name : my-aws-s3-credentials key : accessKey secretKeySecret : name : my-aws-s3-credentials key : secretKey outputs : artifacts : - name : my-output-artifact path : /my-output-artifact s3 : endpoint : storage.googleapis.com bucket : my-gcs-bucket-name # NOTE that, by default, all output artifacts are automatically tarred and # gzipped before saving. So as a best practice, .tgz or .tar.gz # should be incorporated into the key name so the resulting file # has an accurate file extension. key : path/in/bucket/my-output-artifact.tgz accessKeySecret : name : my-gcs-s3-credentials key : accessKey secretKeySecret : name : my-gcs-s3-credentials key : secretKey region : my-GCS-storage-bucket-region container : image : debian:latest command : [ sh , -c ] args : [ \"cp -r /my-input-artifact /my-output-artifact\" ] Artifact Streaming \u00b6 With artifact streaming, artifacts don\u2019t need to be saved to disk first. Artifact streaming is only supported in the following artifact drivers: S3 (v3.4+), Azure Blob (v3.4+), HTTP (v3.5+), and Artifactory (v3.5+). Previously, when a user would click the button to download an artifact in the UI, the artifact would need to be written to the Argo Server\u2019s disk first before downloading. If many users tried to download simultaneously, they would take up disk space and fail the download.","title":"Configuring Your Artifact Repository"},{"location":"configure-artifact-repository/#configuring-your-artifact-repository","text":"To run Argo workflows that use artifacts, you must configure and use an artifact repository. Argo supports any S3 compatible artifact repository such as AWS, GCS and MinIO. This section shows how to configure the artifact repository. Subsequent sections will show how to use it. 
Name Inputs Outputs Garbage Collection Usage (Feb 2020) Artifactory Yes Yes No 11% Azure Blob Yes Yes Yes - GCS Yes Yes Yes - Git Yes No No - HDFS Yes Yes No 3% HTTP Yes Yes No 2% OSS Yes Yes No - Raw Yes No No 5% S3 Yes Yes Yes 86% The actual repository used by a workflow is chosen by the following rules: Anything explicitly configured using Artifact Repository Ref . This is the most flexible, safe, and secure option. From a config map named artifact-repositories if it has the workflows.argoproj.io/default-artifact-repository annotation in the workflow's namespace. From a workflow controller config-map.","title":"Configuring Your Artifact Repository"},{"location":"configure-artifact-repository/#configuring-minio","text":"You can install MinIO into your cluster via Helm. First, install helm . Then, install MinIO with the below commands: helm repo add minio https://helm.min.io/ # official minio Helm charts helm repo update helm install argo-artifacts minio/minio --set service.type = LoadBalancer --set fullnameOverride = argo-artifacts Login to the MinIO UI using a web browser (port 9000) after obtaining the external IP using kubectl . kubectl get service argo-artifacts On Minikube: minikube service --url argo-artifacts NOTE: When MinIO is installed via Helm, it generates credentials, which you will use to login to the UI: Use the commands shown below to see the credentials AccessKey : kubectl get secret argo-artifacts -o jsonpath='{.data.accesskey}' | base64 --decode SecretKey : kubectl get secret argo-artifacts -o jsonpath='{.data.secretkey}' | base64 --decode Create a bucket named my-bucket from the MinIO UI. If MinIO is configured to use TLS you need to set the parameter insecure to false . Additionally, if MinIO is protected by certificates generated by a custom CA, you first need to save the CA certificate in a Kubernetes secret, then set the caSecret parameter accordingly. This will allow Argo to correctly verify the server certificate presented by MinIO. For example: kubectl create secret generic my-root-ca --from-file = my-ca.pem artifacts : - s3 : insecure : false caSecret : name : my-root-ca key : my-ca.pem ...","title":"Configuring MinIO"},{"location":"configure-artifact-repository/#configuring-aws-s3","text":"Create your bucket and access keys for the bucket. AWS access keys have the same permissions as the user they are associated with. In particular, you cannot create access keys with reduced scope. If you want to limit the permissions for an access key, you will need to create a user with just the permissions you want to associate with the access key. Otherwise, you can just create an access key using your existing user account. $ export mybucket = bucket249 $ cat > policy.json < access-key.json If you do not have Artifact Garbage Collection configured, you should remove s3:DeleteObject from the list of Actions above. NOTE: if you want argo to figure out which region your buckets belong in, you must additionally set the following statement policy. Otherwise, you must specify a bucket region in your workflow configuration. { \"Effect\" : \"Allow\" , \"Action\" :[ \"s3:GetBucketLocation\" ], \"Resource\" : \"arn:aws:s3:::*\" } ...","title":"Configuring AWS S3"},{"location":"configure-artifact-repository/#aws-s3-irsa","text":"If you wish to use S3 IRSA instead of passing in an accessKey and secretKey , you need to annotate the service account of both the running workflow (in order to save logs/artifacts) and the argo-server pod (in order to retrieve the logs/artifacts). 
apiVersion : v1 kind : ServiceAccount metadata : annotations : eks.amazonaws.com/role-arn : arn:aws:iam::012345678901:role/mybucket name : myserviceaccount namespace : mynamespace","title":"AWS S3 IRSA"},{"location":"configure-artifact-repository/#configuring-gcs-google-cloud-storage","text":"Create a bucket from the GCP Console ( https://console.cloud.google.com/storage/browser ). There are 2 ways to configure a Google Cloud Storage.","title":"Configuring GCS (Google Cloud Storage)"},{"location":"configure-artifact-repository/#through-native-gcs-apis","text":"Create and download a Google Cloud service account key. Create a kubernetes secret to store the key. Configure gcs artifact as following in the yaml. artifacts : - name : message path : /tmp/message gcs : bucket : my-bucket-name key : path/in/bucket # serviceAccountKeySecret is a secret selector. # It references the k8s secret named 'my-gcs-credentials'. # This secret is expected to have have the key 'serviceAccountKey', # containing the base64 encoded credentials # to the bucket. # # If it's running on GKE and Workload Identity is used, # serviceAccountKeySecret is not needed. serviceAccountKeySecret : name : my-gcs-credentials key : serviceAccountKey If it's a GKE cluster, and Workload Identity is configured, there's no need to create the service account key and store it as a Kubernetes secret, serviceAccountKeySecret is also not needed in this case. Please follow the link to configure Workload Identity ( https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity ).","title":"Through Native GCS APIs"},{"location":"configure-artifact-repository/#use-s3-apis","text":"Enable S3 compatible access and create an access key. Note that S3 compatible access is on a per project rather than per bucket basis. Navigate to Storage > Settings ( https://console.cloud.google.com/storage/settings ). Enable interoperability access if needed. Create a new key if needed. Configure s3 artifact as following example. artifacts : - name : my-output-artifact path : /my-output-artifact s3 : endpoint : storage.googleapis.com bucket : my-gcs-bucket-name # NOTE that, by default, all output artifacts are automatically tarred and # gzipped before saving. So as a best practice, .tgz or .tar.gz # should be incorporated into the key name so the resulting file # has an accurate file extension. key : path/in/bucket/my-output-artifact.tgz accessKeySecret : name : my-gcs-s3-credentials key : accessKey secretKeySecret : name : my-gcs-s3-credentials key : secretKey","title":"Use S3 APIs"},{"location":"configure-artifact-repository/#configuring-alibaba-cloud-oss-object-storage-service","text":"Create your bucket and access key for the bucket. Suggest to limit the permission for the access key, you will need to create a user with just the permissions you want to associate with the access key. Otherwise, you can just create an access key using your existing user account. Setup Alibaba Cloud CLI and follow the steps to configure the artifact storage for your workflow: $ export mybucket = bucket-workflow-artifect $ export myregion = cn-zhangjiakou $ # limit permission to read/write the bucket. $ cat > policy.json < access-key.json $ # create secret in demo namespace, replace demo with your namespace. 
$ kubectl create secret generic $mybucket -credentials -n demo \\ --from-literal \"accessKey= $( cat access-key.json | jq -r .AccessKey.AccessKeyId ) \" \\ --from-literal \"secretKey= $( cat access-key.json | jq -r .AccessKey.AccessKeySecret ) \" $ # create configmap to config default artifact for a namespace. $ cat > default-artifact-repository.yaml << EOF apiVersion: v1 kind: ConfigMap metadata: # If you want to use this config map by default, name it \"artifact-repositories\". Otherwise, you can provide a reference to a # different config map in `artifactRepositoryRef.configMap`. name: artifact-repositories annotations: # v3.0 and after - if you want to use a specific key, put that key into this annotation. workflows.argoproj.io/default-artifact-repository: default-oss-artifact-repository data: default-oss-artifact-repository: | oss: endpoint: http://oss-cn-zhangjiakou-internal.aliyuncs.com bucket: $mybucket # accessKeySecret and secretKeySecret are secret selectors. # It references the k8s secret named 'bucket-workflow-artifect-credentials'. # This secret is expected to have have the keys 'accessKey' # and 'secretKey', containing the base64 encoded credentials # to the bucket. accessKeySecret: name: $mybucket-credentials key: accessKey secretKeySecret: name: $mybucket-credentials key: secretKey EOF # create cm in demo namespace, replace demo with your namespace. $ k apply -f default-artifact-repository.yaml -n demo You can also set createBucketIfNotPresent to true to tell the artifact driver to automatically create the OSS bucket if it doesn't exist yet when saving artifacts. Note that you'll need to set additional permission for your OSS account to create new buckets.","title":"Configuring Alibaba Cloud OSS (Object Storage Service)"},{"location":"configure-artifact-repository/#alibaba-cloud-oss-rrsa","text":"If you wish to use OSS RRSA instead of passing in an accessKey and secretKey , you need to perform the following actions: Install pod-identity-webhook in your cluster to automatically inject the OIDC tokens and environment variables. Add the label pod-identity.alibabacloud.com/injection: 'on' to the target workflow namespace. Add the annotation pod-identity.alibabacloud.com/role-name: $your_ram_role_name to the service account of running workflow. Set useSDKCreds: true in your target artifact repository cm and remove the secret references to AK/SK. apiVersion : v1 kind : Namespace metadata : name : my-ns labels : pod-identity.alibabacloud.com/injection : 'on' --- apiVersion : v1 kind : ServiceAccount metadata : name : my-sa namespace : rrsa-demo annotations : pod-identity.alibabacloud.com/role-name : $your_ram_role_name --- apiVersion : v1 kind : ConfigMap metadata : # If you want to use this config map by default, name it \"artifact-repositories\". Otherwise, you can provide a reference to a # different config map in `artifactRepositoryRef.configMap`. name : artifact-repositories annotations : # v3.0 and after - if you want to use a specific key, put that key into this annotation. workflows.argoproj.io/default-artifact-repository : default-oss-artifact-repository data : default-oss-artifact-repository : | oss: endpoint: http://oss-cn-zhangjiakou-internal.aliyuncs.com bucket: $mybucket useSDKCreds: true","title":"Alibaba Cloud OSS RRSA"},{"location":"configure-artifact-repository/#configuring-azure-blob-storage","text":"Create an Azure Storage account and a container within that account. There are a number of ways to accomplish this, including the Azure Portal or the CLI . 
Retrieve the blob service endpoint for the storage account. For example: az storage account show -n mystorageaccountname --query 'primaryEndpoints.blob' -otsv Retrieve the access key for the storage account. For example: az storage account keys list -n mystorageaccountname --query '[0].value' -otsv Create a kubernetes secret to hold the storage account key. For example: kubectl create secret generic my-azure-storage-credentials \\ --from-literal \"account-access-key= $( az storage account keys list -n mystorageaccountname --query '[0].value' -otsv ) \" Configure azure artifact as following in the yaml. artifacts : - name : message path : /tmp/message azure : endpoint : https://mystorageaccountname.blob.core.windows.net container : my-container-name blob : path/in/container # accountKeySecret is a secret selector. # It references the k8s secret named 'my-azure-storage-credentials'. # This secret is expected to have have the key 'account-access-key', # containing the base64 encoded credentials to the storage account. # # If a managed identity has been assigned to the machines running the # workflow (e.g., https://docs.microsoft.com/en-us/azure/aks/use-managed-identity) # then accountKeySecret is not needed, and useSDKCreds should be # set to true instead: # useSDKCreds: true accountKeySecret : name : my-azure-storage-credentials key : account-access-key If useSDKCreds is set to true , then the accountKeySecret value is not used and authentication with Azure will be attempted using a DefaultAzureCredential instead.","title":"Configuring Azure Blob Storage"},{"location":"configure-artifact-repository/#configure-the-default-artifact-repository","text":"In order for Argo to use your artifact repository, you can configure it as the default repository. Edit the workflow-controller config map with the correct endpoint and access/secret keys for your repository.","title":"Configure the Default Artifact Repository"},{"location":"configure-artifact-repository/#s3-compatible-artifact-repository-bucket-such-as-aws-gcs-minio-and-alibaba-cloud-oss","text":"Use the endpoint corresponding to your provider: AWS: s3.amazonaws.com GCS: storage.googleapis.com MinIO: my-minio-endpoint.default:9000 Alibaba Cloud OSS: oss-cn-hangzhou-zmf.aliyuncs.com The key is name of the object in the bucket The accessKeySecret and secretKeySecret are secret selectors that reference the specified kubernetes secret. The secret is expected to have the keys accessKey and secretKey , containing the base64 encoded credentials to the bucket. For AWS, the accessKeySecret and secretKeySecret correspond to AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY respectively. EC2 provides a meta-data API via which applications using the AWS SDK may assume IAM roles associated with the instance. If you are running argo on EC2 and the instance role allows access to your S3 bucket, you can configure the workflow step pods to assume the role. To do so, simply omit the accessKeySecret and secretKeySecret fields. For GCS, the accessKeySecret and secretKeySecret for S3 compatible access can be obtained from the GCP Console. Note that S3 compatible access is on a per project rather than per bucket basis. Navigate to Storage > Settings ( https://console.cloud.google.com/storage/settings ). Enable interoperability access if needed. Create a new key if needed. For MinIO, the accessKeySecret and secretKeySecret naturally correspond the AccessKey and SecretKey . 
For Alibaba Cloud OSS, the accessKeySecret and secretKeySecret corresponds to accessKeyID and accessKeySecret respectively. Example: $ kubectl edit configmap workflow-controller-configmap -n argo # assumes argo was installed in the argo namespace ... data: artifactRepository: | s3: bucket: my-bucket keyFormat: prefix/in/bucket #optional endpoint: my-minio-endpoint.default:9000 #AWS => s3.amazonaws.com; GCS => storage.googleapis.com insecure: true #omit for S3/GCS. Needed when minio runs without TLS accessKeySecret: #omit if accessing via AWS IAM name: my-minio-cred key: accessKey secretKeySecret: #omit if accessing via AWS IAM name: my-minio-cred key: secretKey useSDKCreds: true #tells argo to use AWS SDK's default provider chain, enable for things like IRSA support The secrets are retrieved from the namespace you use to run your workflows. Note that you can specify a keyFormat .","title":"S3 compatible artifact repository bucket (such as AWS, GCS, MinIO, and Alibaba Cloud OSS)"},{"location":"configure-artifact-repository/#google-cloud-storage-gcs","text":"Argo also can use native GCS APIs to access a Google Cloud Storage bucket. serviceAccountKeySecret references to a Kubernetes secret which stores a Google Cloud service account key to access the bucket. Example: $ kubectl edit configmap workflow-controller-configmap -n argo # assumes argo was installed in the argo namespace ... data: artifactRepository: | gcs: bucket: my-bucket keyFormat: prefix/in/bucket/ {{ workflow.name }} / {{ pod.name }} #it should reference workflow variables, such as \"{{workflow.name}}/{{pod.name}}\" serviceAccountKeySecret: name: my-gcs-credentials key: serviceAccountKey","title":"Google Cloud Storage (GCS)"},{"location":"configure-artifact-repository/#azure-blob-storage","text":"Argo can use native Azure APIs to access a Azure Blob Storage container. accountKeySecret references to a Kubernetes secret which stores an Azure Blob Storage account shared key to access the container. Example: $ kubectl edit configmap workflow-controller-configmap -n argo # assumes argo was installed in the argo namespace ... data: artifactRepository: | azure: container: my-container blobNameFormat: prefix/in/container #optional, it could reference workflow variables, such as \"{{workflow.name}}/{{pod.name}}\" accountKeySecret: name: my-azure-storage-credentials key: account-access-key","title":"Azure Blob Storage"},{"location":"configure-artifact-repository/#accessing-non-default-artifact-repositories","text":"This section shows how to access artifacts from non-default artifact repositories. The endpoint , accessKeySecret and secretKeySecret are the same as for configuring the default artifact repository described previously. templates : - name : artifact-example inputs : artifacts : - name : my-input-artifact path : /my-input-artifact s3 : endpoint : s3.amazonaws.com bucket : my-aws-bucket-name key : path/in/bucket/my-input-artifact.tgz accessKeySecret : name : my-aws-s3-credentials key : accessKey secretKeySecret : name : my-aws-s3-credentials key : secretKey outputs : artifacts : - name : my-output-artifact path : /my-output-artifact s3 : endpoint : storage.googleapis.com bucket : my-gcs-bucket-name # NOTE that, by default, all output artifacts are automatically tarred and # gzipped before saving. So as a best practice, .tgz or .tar.gz # should be incorporated into the key name so the resulting file # has an accurate file extension. 
key : path/in/bucket/my-output-artifact.tgz accessKeySecret : name : my-gcs-s3-credentials key : accessKey secretKeySecret : name : my-gcs-s3-credentials key : secretKey region : my-GCS-storage-bucket-region container : image : debian:latest command : [ sh , -c ] args : [ \"cp -r /my-input-artifact /my-output-artifact\" ]","title":"Accessing Non-Default Artifact Repositories"},{"location":"configure-artifact-repository/#artifact-streaming","text":"With artifact streaming, artifacts don\u2019t need to be saved to disk first. Artifact streaming is only supported in the following artifact drivers: S3 (v3.4+), Azure Blob (v3.4+), HTTP (v3.5+), and Artifactory (v3.5+). Previously, when a user would click the button to download an artifact in the UI, the artifact would need to be written to the Argo Server\u2019s disk first before downloading. If many users tried to download simultaneously, they would take up disk space and fail the download.","title":"Artifact Streaming"},{"location":"container-set-template/","text":"Container Set Template \u00b6 v3.1 and after A container set templates is similar to a normal container or script template, but allows you to specify multiple containers to run within a single pod. Because you have multiple containers within a pod, they will be scheduled on the same host. You can use cheap and fast empty-dir volumes instead of persistent volume claims to share data between steps. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : container-set-template- spec : entrypoint : main templates : - name : main volumes : - name : workspace emptyDir : { } containerSet : volumeMounts : - mountPath : /workspace name : workspace containers : - name : a image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"echo 'a: hello world' >> /workspace/message\" ] - name : b image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"echo 'b: hello world' >> /workspace/message\" ] - name : main image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"echo 'main: hello world' >> /workspace/message\" ] dependencies : - a - b outputs : parameters : - name : message valueFrom : path : /workspace/message There are a couple of caveats: You must use the Emissary Executor . Or all containers must run in parallel - i.e. it is a graph with no dependencies. You cannot use enhanced depends logic . It will use the sum total of all resource requests, maybe costing more than the same DAG template. This will be a problem if your requests already cost a lot. See below. The containers can be arranged as a graph by specifying dependencies. This is suitable for running 10s rather than 100s of containers. Inputs and Outputs \u00b6 As with the container and script templates, inputs and outputs can only be loaded and saved from a container named main . All container set templates that have artifacts must/should have a container named main . If you want to use base-layer artifacts, main must be last to finish, so it must be the root node in the graph. That is may not be practical. Instead, have a workspace volume and make sure all artifacts paths are on that volume. \u26a0\ufe0f Resource Requests \u00b6 A container set actually starts all containers, and the Emissary only starts the main container process when the containers it depends on have completed. This mean that even though the container is doing no useful work, it is still consuming resources and you're still getting billed for them. If your requests are small, this won't be a problem. 
If your requests are large, set the resource requests so the sum total is the most you'll need at once. Example A: a simple sequence e.g. a -> b -> c a needs 1Gi memory b needs 2Gi memory c needs 1Gi memory Then you know you need only a maximum of 2Gi. You could set as follows: a requests 512Mi memory b requests 1Gi memory c requests 512Mi memory The total is 2Gi, which is enough for b . We're all good. Example B: Diamond DAG e.g. a diamond a -> b -> d and a -> c -> d , i.e. b and c run at the same time. a needs 1000 cpu b needs 2000 cpu c needs 1000 cpu d needs 1000 cpu I know that b and c will run at the same time. So I need to make sure the total is 3000. a requests 500 cpu b requests 1000 cpu c requests 1000 cpu d requests 500 cpu The total is 3000, which is enough for b + c . We're all good. Example B: Lopsided requests, e.g. a -> b where a is cheap and b is expensive a needs 100 cpu, 1Mi memory, runs for 10h b needs 8Ki GPU, 100 Gi memory, 200 Ki GPU, runs for 5m Can you see the problem here? a only has small requests, but the container set will use the total of all requests. So it's as if you're using all that GPU for 10h. This will be expensive. Solution: do not use container set when you have lopsided requests.","title":"Container Set Template"},{"location":"container-set-template/#container-set-template","text":"v3.1 and after A container set templates is similar to a normal container or script template, but allows you to specify multiple containers to run within a single pod. Because you have multiple containers within a pod, they will be scheduled on the same host. You can use cheap and fast empty-dir volumes instead of persistent volume claims to share data between steps. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : container-set-template- spec : entrypoint : main templates : - name : main volumes : - name : workspace emptyDir : { } containerSet : volumeMounts : - mountPath : /workspace name : workspace containers : - name : a image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"echo 'a: hello world' >> /workspace/message\" ] - name : b image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"echo 'b: hello world' >> /workspace/message\" ] - name : main image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"echo 'main: hello world' >> /workspace/message\" ] dependencies : - a - b outputs : parameters : - name : message valueFrom : path : /workspace/message There are a couple of caveats: You must use the Emissary Executor . Or all containers must run in parallel - i.e. it is a graph with no dependencies. You cannot use enhanced depends logic . It will use the sum total of all resource requests, maybe costing more than the same DAG template. This will be a problem if your requests already cost a lot. See below. The containers can be arranged as a graph by specifying dependencies. This is suitable for running 10s rather than 100s of containers.","title":"Container Set Template"},{"location":"container-set-template/#inputs-and-outputs","text":"As with the container and script templates, inputs and outputs can only be loaded and saved from a container named main . All container set templates that have artifacts must/should have a container named main . If you want to use base-layer artifacts, main must be last to finish, so it must be the root node in the graph. That is may not be practical. 
Instead, have a workspace volume and make sure all artifacts paths are on that volume.","title":"Inputs and Outputs"},{"location":"container-set-template/#resource-requests","text":"A container set actually starts all containers, and the Emissary only starts the main container process when the containers it depends on have completed. This mean that even though the container is doing no useful work, it is still consuming resources and you're still getting billed for them. If your requests are small, this won't be a problem. If your requests are large, set the resource requests so the sum total is the most you'll need at once. Example A: a simple sequence e.g. a -> b -> c a needs 1Gi memory b needs 2Gi memory c needs 1Gi memory Then you know you need only a maximum of 2Gi. You could set as follows: a requests 512Mi memory b requests 1Gi memory c requests 512Mi memory The total is 2Gi, which is enough for b . We're all good. Example B: Diamond DAG e.g. a diamond a -> b -> d and a -> c -> d , i.e. b and c run at the same time. a needs 1000 cpu b needs 2000 cpu c needs 1000 cpu d needs 1000 cpu I know that b and c will run at the same time. So I need to make sure the total is 3000. a requests 500 cpu b requests 1000 cpu c requests 1000 cpu d requests 500 cpu The total is 3000, which is enough for b + c . We're all good. Example B: Lopsided requests, e.g. a -> b where a is cheap and b is expensive a needs 100 cpu, 1Mi memory, runs for 10h b needs 8Ki GPU, 100 Gi memory, 200 Ki GPU, runs for 5m Can you see the problem here? a only has small requests, but the container set will use the total of all requests. So it's as if you're using all that GPU for 10h. This will be expensive. Solution: do not use container set when you have lopsided requests.","title":"\u26a0\ufe0f Resource Requests"},{"location":"cost-optimisation/","text":"Cost Optimization \u00b6 User Cost Optimizations \u00b6 Suggestions for users running workflows. Set The Workflows Pod Resource Requests \u00b6 Suitable if you are running a workflow with many homogeneous pods. Resource duration shows the amount of CPU and memory requested by a pod and is indicative of the cost. You can use this to find costly steps within your workflow. Smaller requests can be set in the pod spec patch's resource requirements . Use A Node Selector To Use Cheaper Instances \u00b6 You can use a node selector for cheaper instances, e.g. spot instances: nodeSelector : \"node-role.kubernetes.io/argo-spot-worker\" : \"true\" Consider trying Volume Claim Templates or Volumes instead of Artifacts \u00b6 Suitable if you have a workflow that passes a lot of artifacts within itself. Copying artifacts to and from storage outside of a cluster can be expensive. The correct choice is dependent on what your artifact storage provider is vs. what volume they are using. For example, we believe it may be more expensive to allocate and delete a new block storage volume (AWS EBS, GCP persistent disk) every workflow using the PVC feature, than it is to upload and download some small files to object storage (AWS S3, GCP cloud storage). On the other hand if you are using a NFS volume shared between all your workflows with large artifacts, that might be cheaper than the data transfer and storage costs of object storage. Consider: Data transfer costs (upload/download vs. copying) Data storage costs (object storage vs. volume) Requirement for parallel access to data (NFS vs. block storage vs. artifact) When using volume claims, consider configuring Volume Claim GC . 
By default, claims are only deleted when a workflow is successful. Limit The Total Number Of Workflows And Pods \u00b6 Suitable for all. A workflow (and for that matter, any Kubernetes resource) will incur a cost as long as it exists in your cluster, even after it's no longer running. The workflow controller memory and CPU needs to increase linearly with the number of pods and workflows you are currently running. You should delete workflows once they are no longer needed. You can enable the Workflow Archive to continue viewing them after they are removed from Kubernetes. Limit the total number of workflows using: Active Deadline Seconds - terminate running workflows that do not complete in a set time. This will make sure workflows do not run forever. Workflow TTL Strategy - delete completed workflows after a set time. Pod GC - delete completed pods. By default, Pods are not deleted. CronWorkflow history limits - delete successful or failed workflows which exceed the limit. Example spec : # must complete in 8h (28,800 seconds) activeDeadlineSeconds : 28800 # keep workflows for 1d (86,400 seconds) ttlStrategy : secondsAfterCompletion : 86400 # delete all pods as soon as they complete podGC : strategy : OnPodCompletion You can set these configurations globally using Default Workflow Spec . Changing these settings will not delete workflows that have already run. To list old workflows: argo list --completed --since 7d v2.9 and after To list/delete workflows completed over 7 days ago: argo list --older 7d argo delete --older 7d Operator Cost Optimizations \u00b6 Suggestions for operators who installed Argo Workflows. Set Resources Requests and Limits \u00b6 Suitable if you have many instances, e.g. on dozens of clusters or namespaces. Set resource requests and limits for the workflow-controller and argo-server , e.g. requests : cpu : 100m memory : 64Mi limits : cpu : 500m memory : 128Mi This above limit is suitable for the Argo Server, as this is stateless. The Workflow Controller is stateful and will scale to the number of live workflows - so you are likely to need higher values. Configure Executor Resource Requests \u00b6 Suitable for all - unless you have large artifacts. Configure workflow-controller-configmap.yaml to set the executor.resources : executor : | resources: requests: cpu: 100m memory: 64Mi limits: cpu: 500m memory: 512Mi The correct values depend on the size of artifacts your workflows download. For artifacts > 10GB, memory usage may be large - #1322 .","title":"Cost Optimization"},{"location":"cost-optimisation/#cost-optimization","text":"","title":"Cost Optimization"},{"location":"cost-optimisation/#user-cost-optimizations","text":"Suggestions for users running workflows.","title":"User Cost Optimizations"},{"location":"cost-optimisation/#set-the-workflows-pod-resource-requests","text":"Suitable if you are running a workflow with many homogeneous pods. Resource duration shows the amount of CPU and memory requested by a pod and is indicative of the cost. You can use this to find costly steps within your workflow. Smaller requests can be set in the pod spec patch's resource requirements .","title":"Set The Workflows Pod Resource Requests"},{"location":"cost-optimisation/#use-a-node-selector-to-use-cheaper-instances","text":"You can use a node selector for cheaper instances, e.g. 
spot instances: nodeSelector : \"node-role.kubernetes.io/argo-spot-worker\" : \"true\"","title":"Use A Node Selector To Use Cheaper Instances"},{"location":"cost-optimisation/#consider-trying-volume-claim-templates-or-volumes-instead-of-artifacts","text":"Suitable if you have a workflow that passes a lot of artifacts within itself. Copying artifacts to and from storage outside of a cluster can be expensive. The correct choice is dependent on what your artifact storage provider is vs. what volume they are using. For example, we believe it may be more expensive to allocate and delete a new block storage volume (AWS EBS, GCP persistent disk) every workflow using the PVC feature, than it is to upload and download some small files to object storage (AWS S3, GCP cloud storage). On the other hand if you are using a NFS volume shared between all your workflows with large artifacts, that might be cheaper than the data transfer and storage costs of object storage. Consider: Data transfer costs (upload/download vs. copying) Data storage costs (object storage vs. volume) Requirement for parallel access to data (NFS vs. block storage vs. artifact) When using volume claims, consider configuring Volume Claim GC . By default, claims are only deleted when a workflow is successful.","title":"Consider trying Volume Claim Templates or Volumes instead of Artifacts"},{"location":"cost-optimisation/#limit-the-total-number-of-workflows-and-pods","text":"Suitable for all. A workflow (and for that matter, any Kubernetes resource) will incur a cost as long as it exists in your cluster, even after it's no longer running. The workflow controller memory and CPU needs to increase linearly with the number of pods and workflows you are currently running. You should delete workflows once they are no longer needed. You can enable the Workflow Archive to continue viewing them after they are removed from Kubernetes. Limit the total number of workflows using: Active Deadline Seconds - terminate running workflows that do not complete in a set time. This will make sure workflows do not run forever. Workflow TTL Strategy - delete completed workflows after a set time. Pod GC - delete completed pods. By default, Pods are not deleted. CronWorkflow history limits - delete successful or failed workflows which exceed the limit. Example spec : # must complete in 8h (28,800 seconds) activeDeadlineSeconds : 28800 # keep workflows for 1d (86,400 seconds) ttlStrategy : secondsAfterCompletion : 86400 # delete all pods as soon as they complete podGC : strategy : OnPodCompletion You can set these configurations globally using Default Workflow Spec . Changing these settings will not delete workflows that have already run. To list old workflows: argo list --completed --since 7d v2.9 and after To list/delete workflows completed over 7 days ago: argo list --older 7d argo delete --older 7d","title":"Limit The Total Number Of Workflows And Pods"},{"location":"cost-optimisation/#operator-cost-optimizations","text":"Suggestions for operators who installed Argo Workflows.","title":"Operator Cost Optimizations"},{"location":"cost-optimisation/#set-resources-requests-and-limits","text":"Suitable if you have many instances, e.g. on dozens of clusters or namespaces. Set resource requests and limits for the workflow-controller and argo-server , e.g. requests : cpu : 100m memory : 64Mi limits : cpu : 500m memory : 128Mi This above limit is suitable for the Argo Server, as this is stateless. 
The Workflow Controller is stateful and will scale to the number of live workflows - so you are likely to need higher values.","title":"Set Resources Requests and Limits"},{"location":"cost-optimisation/#configure-executor-resource-requests","text":"Suitable for all - unless you have large artifacts. Configure workflow-controller-configmap.yaml to set the executor.resources : executor : | resources: requests: cpu: 100m memory: 64Mi limits: cpu: 500m memory: 512Mi The correct values depend on the size of artifacts your workflows download. For artifacts > 10GB, memory usage may be large - #1322 .","title":"Configure Executor Resource Requests"},{"location":"cron-backfill/","text":"Cron Backfill \u00b6 Use Case \u00b6 You are using cron workflows to run daily jobs, you may need to re-run for a date, or run some historical days. Solution \u00b6 Create a workflow template for your daily job. Create your cron workflow to run daily and invoke that template. Create a backfill workflow that uses withSequence to run the job for each date. This full example contains: A workflow template named job . A cron workflow named daily-job . A workflow named backfill-v1 that uses a resource template to create one workflow for each backfill date. A alternative workflow named backfill-v2 that uses a steps templates to run one task for each backfill date.","title":"Cron Backfill"},{"location":"cron-backfill/#cron-backfill","text":"","title":"Cron Backfill"},{"location":"cron-backfill/#use-case","text":"You are using cron workflows to run daily jobs, you may need to re-run for a date, or run some historical days.","title":"Use Case"},{"location":"cron-backfill/#solution","text":"Create a workflow template for your daily job. Create your cron workflow to run daily and invoke that template. Create a backfill workflow that uses withSequence to run the job for each date. This full example contains: A workflow template named job . A cron workflow named daily-job . A workflow named backfill-v1 that uses a resource template to create one workflow for each backfill date. A alternative workflow named backfill-v2 that uses a steps templates to run one task for each backfill date.","title":"Solution"},{"location":"cron-workflows/","text":"Cron Workflows \u00b6 v2.5 and after Introduction \u00b6 CronWorkflow are workflows that run on a preset schedule. They are designed to be converted from Workflow easily and to mimic the same options as Kubernetes CronJob . In essence, CronWorkflow = Workflow + some specific cron options. CronWorkflow Spec \u00b6 An example CronWorkflow spec would look like: apiVersion : argoproj.io/v1alpha1 kind : CronWorkflow metadata : name : test-cron-wf spec : schedule : \"* * * * *\" concurrencyPolicy : \"Replace\" startingDeadlineSeconds : 0 workflowSpec : entrypoint : whalesay templates : - name : whalesay container : image : alpine:3.6 command : [ sh , -c ] args : [ \"date; sleep 90\" ] workflowSpec and workflowMetadata \u00b6 CronWorkflow.spec.workflowSpec is the same type as Workflow.spec and serves as a template for Workflow objects that are created from it. Everything under this spec will be converted to a Workflow . The resulting Workflow name will be a generated name based on the CronWorkflow name. In this example it could be something like test-cron-wf-tj6fe . CronWorkflow.spec.workflowMetadata can be used to add labels and annotations . CronWorkflow Options \u00b6 Option Name Default Value Description schedule None, must be provided Schedule at which the Workflow will be run. E.g. 
5 4 * * * timezone Machine timezone Timezone during which the Workflow will be run from the IANA timezone standard, e.g. America/Los_Angeles suspend false If true Workflow scheduling will not occur. Can be set from the CLI, GitOps, or directly concurrencyPolicy Allow Policy that determines what to do if multiple Workflows are scheduled at the same time. Available options: Allow : allow all, Replace : remove all old before scheduling a new, Forbid : do not allow any new while there are old startingDeadlineSeconds 0 Number of seconds after the last successful run during which a missed Workflow will be run successfulJobsHistoryLimit 3 Number of successful Workflows that will be persisted at a time failedJobsHistoryLimit 1 Number of failed Workflows that will be persisted at a time Cron Schedule Syntax \u00b6 The cron scheduler uses the standard cron syntax, as documented on Wikipedia . More detailed documentation for the specific library used is documented here . Crash Recovery \u00b6 If the workflow-controller crashes (and hence the CronWorkflow controller), there are some options you can set to ensure that CronWorkflows that would have been scheduled while the controller was down can still run. Mainly startingDeadlineSeconds can be set to specify the maximum number of seconds past the last successful run of a CronWorkflow during which a missed run will still be executed. For example, if a CronWorkflow that runs every minute is last run at 12:05:00, and the controller crashes between 12:05:55 and 12:06:05, then the expected execution time of 12:06:00 would be missed. However, if startingDeadlineSeconds is set to a value greater than 65 (the amount of time passing between the last scheduled run time of 12:05:00 and the current controller restart time of 12:06:05), then a single instance of the CronWorkflow will be executed exactly at 12:06:05. Currently only a single instance will be executed as a result of setting startingDeadlineSeconds . This setting can also be configured in tandem with concurrencyPolicy to achieve more fine-tuned control. Daylight Saving \u00b6 Daylight Saving (DST) is taken into account when using timezone. This means that, depending on the local time of the scheduled job, argo will schedule the workflow once, twice, or not at all when the clock moves forward or back. For example, with timezone set at America/Los_Angeles , we have daylight saving +1 hour (DST start) at 2020-03-08 02:00:00: Note: The schedules between 02:00 a.m. to 02:59 a.m. were skipped on Mar 8th due to the clock being moved forward: cron sequence workflow execution time 59 1 ** * 1 2020-03-08 01:59:00 -0800 PST 2 2020-03-09 01:59:00 -0700 PDT 3 2020-03-10 01:59:00 -0700 PDT 0 2 ** * 1 2020-03-09 02:00:00 -0700 PDT 2 2020-03-10 02:00:00 -0700 PDT 3 2020-03-11 02:00:00 -0700 PDT 1 2 ** * 1 2020-03-09 02:01:00 -0700 PDT 2 2020-03-10 02:01:00 -0700 PDT 3 2020-03-11 02:01:00 -0700 PDT -1 hour (DST end) at 2020-11-01 02:00:00: Note: the schedules between 01:00 a.m. to 01:59 a.m. 
were triggered twice on Nov 1st due to the clock being set back: cron sequence workflow execution time 59 1 ** * 1 2020-11-01 01:59:00 -0700 PDT 2 2020-11-01 01:59:00 -0800 PST 3 2020-11-02 01:59:00 -0800 PST 0 2 ** * 1 2020-11-01 02:00:00 -0800 PST 2 2020-11-02 02:00:00 -0800 PST 3 2020-11-03 02:00:00 -0800 PST 1 2 ** * 1 2020-11-01 02:01:00 -0800 PST 2 2020-11-02 02:01:00 -0800 PST 3 2020-11-03 02:01:00 -0800 PST Managing CronWorkflow \u00b6 CLI \u00b6 CronWorkflow can be created from the CLI by using basic commands: $ argo cron create cron.yaml Name: test-cron-wf Namespace: argo Created: Mon Nov 18 10 :17:06 -0800 ( now ) Schedule: * * * * * Suspended: false StartingDeadlineSeconds: 0 ConcurrencyPolicy: Forbid $ argo cron list NAME AGE LAST RUN SCHEDULE SUSPENDED test-cron-wf 49s N/A * * * * * false # some time passes $ argo cron list NAME AGE LAST RUN SCHEDULE SUSPENDED test-cron-wf 56s 2s * * * * * false $ argo cron get test-cron-wf Name: test-cron-wf Namespace: argo Created: Wed Oct 28 07 :19:02 -0600 ( 23 hours ago ) Schedule: * * * * * Suspended: false StartingDeadlineSeconds: 0 ConcurrencyPolicy: Replace LastScheduledTime: Thu Oct 29 06 :51:00 -0600 ( 11 minutes ago ) NextScheduledTime: Thu Oct 29 13 :03:00 +0000 ( 32 seconds from now ) Active Workflows: test-cron-wf-rt4nf Note : NextScheduledRun assumes that the workflow-controller uses UTC as its timezone kubectl \u00b6 Using kubectl apply -f and kubectl get cwf Back-Filling Days \u00b6 See cron backfill . GitOps via Argo CD \u00b6 CronWorkflow resources can be managed with GitOps by using Argo CD UI \u00b6 CronWorkflow resources can also be managed by the UI","title":"Cron Workflows"},{"location":"cron-workflows/#cron-workflows","text":"v2.5 and after","title":"Cron Workflows"},{"location":"cron-workflows/#introduction","text":"CronWorkflow are workflows that run on a preset schedule. They are designed to be converted from Workflow easily and to mimic the same options as Kubernetes CronJob . In essence, CronWorkflow = Workflow + some specific cron options.","title":"Introduction"},{"location":"cron-workflows/#cronworkflow-spec","text":"An example CronWorkflow spec would look like: apiVersion : argoproj.io/v1alpha1 kind : CronWorkflow metadata : name : test-cron-wf spec : schedule : \"* * * * *\" concurrencyPolicy : \"Replace\" startingDeadlineSeconds : 0 workflowSpec : entrypoint : whalesay templates : - name : whalesay container : image : alpine:3.6 command : [ sh , -c ] args : [ \"date; sleep 90\" ]","title":"CronWorkflow Spec"},{"location":"cron-workflows/#workflowspec-and-workflowmetadata","text":"CronWorkflow.spec.workflowSpec is the same type as Workflow.spec and serves as a template for Workflow objects that are created from it. Everything under this spec will be converted to a Workflow . The resulting Workflow name will be a generated name based on the CronWorkflow name. In this example it could be something like test-cron-wf-tj6fe . CronWorkflow.spec.workflowMetadata can be used to add labels and annotations .","title":"workflowSpec and workflowMetadata"},{"location":"cron-workflows/#cronworkflow-options","text":"Option Name Default Value Description schedule None, must be provided Schedule at which the Workflow will be run. E.g. 5 4 * * * timezone Machine timezone Timezone during which the Workflow will be run from the IANA timezone standard, e.g. America/Los_Angeles suspend false If true Workflow scheduling will not occur. 
Can be set from the CLI, GitOps, or directly concurrencyPolicy Allow Policy that determines what to do if multiple Workflows are scheduled at the same time. Available options: Allow : allow all, Replace : remove all old before scheduling a new, Forbid : do not allow any new while there are old startingDeadlineSeconds 0 Number of seconds after the last successful run during which a missed Workflow will be run successfulJobsHistoryLimit 3 Number of successful Workflows that will be persisted at a time failedJobsHistoryLimit 1 Number of failed Workflows that will be persisted at a time","title":"CronWorkflow Options"},{"location":"cron-workflows/#cron-schedule-syntax","text":"The cron scheduler uses the standard cron syntax, as documented on Wikipedia . More detailed documentation for the specific library used is documented here .","title":"Cron Schedule Syntax"},{"location":"cron-workflows/#crash-recovery","text":"If the workflow-controller crashes (and hence the CronWorkflow controller), there are some options you can set to ensure that CronWorkflows that would have been scheduled while the controller was down can still run. Mainly startingDeadlineSeconds can be set to specify the maximum number of seconds past the last successful run of a CronWorkflow during which a missed run will still be executed. For example, if a CronWorkflow that runs every minute is last run at 12:05:00, and the controller crashes between 12:05:55 and 12:06:05, then the expected execution time of 12:06:00 would be missed. However, if startingDeadlineSeconds is set to a value greater than 65 (the amount of time passing between the last scheduled run time of 12:05:00 and the current controller restart time of 12:06:05), then a single instance of the CronWorkflow will be executed exactly at 12:06:05. Currently only a single instance will be executed as a result of setting startingDeadlineSeconds . This setting can also be configured in tandem with concurrencyPolicy to achieve more fine-tuned control.","title":"Crash Recovery"},{"location":"cron-workflows/#daylight-saving","text":"Daylight Saving (DST) is taken into account when using timezone. This means that, depending on the local time of the scheduled job, argo will schedule the workflow once, twice, or not at all when the clock moves forward or back. For example, with timezone set at America/Los_Angeles , we have daylight saving +1 hour (DST start) at 2020-03-08 02:00:00: Note: The schedules between 02:00 a.m. to 02:59 a.m. were skipped on Mar 8th due to the clock being moved forward: cron sequence workflow execution time 59 1 ** * 1 2020-03-08 01:59:00 -0800 PST 2 2020-03-09 01:59:00 -0700 PDT 3 2020-03-10 01:59:00 -0700 PDT 0 2 ** * 1 2020-03-09 02:00:00 -0700 PDT 2 2020-03-10 02:00:00 -0700 PDT 3 2020-03-11 02:00:00 -0700 PDT 1 2 ** * 1 2020-03-09 02:01:00 -0700 PDT 2 2020-03-10 02:01:00 -0700 PDT 3 2020-03-11 02:01:00 -0700 PDT -1 hour (DST end) at 2020-11-01 02:00:00: Note: the schedules between 01:00 a.m. to 01:59 a.m. 
were triggered twice on Nov 1st due to the clock being set back: cron sequence workflow execution time 59 1 ** * 1 2020-11-01 01:59:00 -0700 PDT 2 2020-11-01 01:59:00 -0800 PST 3 2020-11-02 01:59:00 -0800 PST 0 2 ** * 1 2020-11-01 02:00:00 -0800 PST 2 2020-11-02 02:00:00 -0800 PST 3 2020-11-03 02:00:00 -0800 PST 1 2 ** * 1 2020-11-01 02:01:00 -0800 PST 2 2020-11-02 02:01:00 -0800 PST 3 2020-11-03 02:01:00 -0800 PST","title":"Daylight Saving"},{"location":"cron-workflows/#managing-cronworkflow","text":"","title":"Managing CronWorkflow"},{"location":"cron-workflows/#cli","text":"CronWorkflow can be created from the CLI by using basic commands: $ argo cron create cron.yaml Name: test-cron-wf Namespace: argo Created: Mon Nov 18 10 :17:06 -0800 ( now ) Schedule: * * * * * Suspended: false StartingDeadlineSeconds: 0 ConcurrencyPolicy: Forbid $ argo cron list NAME AGE LAST RUN SCHEDULE SUSPENDED test-cron-wf 49s N/A * * * * * false # some time passes $ argo cron list NAME AGE LAST RUN SCHEDULE SUSPENDED test-cron-wf 56s 2s * * * * * false $ argo cron get test-cron-wf Name: test-cron-wf Namespace: argo Created: Wed Oct 28 07 :19:02 -0600 ( 23 hours ago ) Schedule: * * * * * Suspended: false StartingDeadlineSeconds: 0 ConcurrencyPolicy: Replace LastScheduledTime: Thu Oct 29 06 :51:00 -0600 ( 11 minutes ago ) NextScheduledTime: Thu Oct 29 13 :03:00 +0000 ( 32 seconds from now ) Active Workflows: test-cron-wf-rt4nf Note : NextScheduledRun assumes that the workflow-controller uses UTC as its timezone","title":"CLI"},{"location":"cron-workflows/#kubectl","text":"Using kubectl apply -f and kubectl get cwf","title":"kubectl"},{"location":"cron-workflows/#back-filling-days","text":"See cron backfill .","title":"Back-Filling Days"},{"location":"cron-workflows/#gitops-via-argo-cd","text":"CronWorkflow resources can be managed with GitOps by using Argo CD","title":"GitOps via Argo CD"},{"location":"cron-workflows/#ui","text":"CronWorkflow resources can also be managed by the UI","title":"UI"},{"location":"data-sourcing-and-transformation/","text":"Data Sourcing and Transformations \u00b6 v3.1 and after We have intentionally made this feature available with only bare-bones functionality. Our hope is that we are able to build this feature with our community's feedback. If you have ideas and use cases for this feature, please open an enhancement proposal on GitHub. Additionally, please take a look at our current ideas at the bottom of this document. Introduction \u00b6 Users often source and transform data as part of their workflows. The data template provides first-class support for these common operations. data templates can best be understood by looking at a common data sourcing and transformation operation in bash : find -r . | grep \".pdf\" | sed \"s/foo/foo.ready/\" Such operations consist of two main parts: A \"source\" of data: find -r . A series of \"transformations\" which transform the output of the source serially: | grep \".pdf\" | sed \"s/foo/foo.ready/\" This operation, for example, could be useful in sourcing a potential list of files to be processed and filtering and manipulating the list as desired. 
In Argo, this operation would be written as: - name : generate-artifacts data : source : # Define a source for the data, only a single \"source\" is permitted artifactPaths : # A predefined source: Generate a list of all artifact paths in a given repository s3 : # Source from an S3 bucket bucket : test endpoint : minio:9000 insecure : true accessKeySecret : name : my-minio-cred key : accesskey secretKeySecret : name : my-minio-cred key : secretkey transformation : # The source is then passed to be transformed by transformations defined here - expression : \"filter(data, {# endsWith \\\".pdf\\\"})\" - expression : \"map(data, {# + \\\".ready\\\"})\" Spec \u00b6 A data template must always contain a source . Current available sources: artifactPaths : generates a list of artifact paths from the artifact repository specified A data template may contain any number of transformations (or zero). The transformations will be applied serially in order. Current available transformations: expression : an expr expression. See language definition here . When defining expr expressions Argo will pass the available data to the environment as a variable called data (see example above). We understand that the expression transformation is limited. We intend to greatly expand the functionality of this template with our community's feedback. Please see the link at the top of this document to submit ideas or use cases for this feature.","title":"Data Sourcing and Transformations"},{"location":"data-sourcing-and-transformation/#data-sourcing-and-transformations","text":"v3.1 and after We have intentionally made this feature available with only bare-bones functionality. Our hope is that we are able to build this feature with our community's feedback. If you have ideas and use cases for this feature, please open an enhancement proposal on GitHub. Additionally, please take a look at our current ideas at the bottom of this document.","title":"Data Sourcing and Transformations"},{"location":"data-sourcing-and-transformation/#introduction","text":"Users often source and transform data as part of their workflows. The data template provides first-class support for these common operations. data templates can best be understood by looking at a common data sourcing and transformation operation in bash : find -r . | grep \".pdf\" | sed \"s/foo/foo.ready/\" Such operations consist of two main parts: A \"source\" of data: find -r . A series of \"transformations\" which transform the output of the source serially: | grep \".pdf\" | sed \"s/foo/foo.ready/\" This operation, for example, could be useful in sourcing a potential list of files to be processed and filtering and manipulating the list as desired. In Argo, this operation would be written as: - name : generate-artifacts data : source : # Define a source for the data, only a single \"source\" is permitted artifactPaths : # A predefined source: Generate a list of all artifact paths in a given repository s3 : # Source from an S3 bucket bucket : test endpoint : minio:9000 insecure : true accessKeySecret : name : my-minio-cred key : accesskey secretKeySecret : name : my-minio-cred key : secretkey transformation : # The source is then passed to be transformed by transformations defined here - expression : \"filter(data, {# endsWith \\\".pdf\\\"})\" - expression : \"map(data, {# + \\\".ready\\\"})\"","title":"Introduction"},{"location":"data-sourcing-and-transformation/#spec","text":"A data template must always contain a source . 
Current available sources: artifactPaths : generates a list of artifact paths from the artifact repository specified A data template may contain any number of transformations (or zero). The transformations will be applied serially in order. Current available transformations: expression : an expr expression. See language definition here . When defining expr expressions Argo will pass the available data to the environment as a variable called data (see example above). We understand that the expression transformation is limited. We intend to greatly expand the functionality of this template with our community's feedback. Please see the link at the top of this document to submit ideas or use cases for this feature.","title":"Spec"},{"location":"debug-pause/","text":"Debug Pause \u00b6 v3.3 and after Introduction \u00b6 The debug pause feature makes it possible to pause individual workflow steps for debugging before, after or both and then release the steps from the paused state. Currently this feature is only supported when using the Emissary Executor In order to pause a container env variables are used: ARGO_DEBUG_PAUSE_AFTER - to pause a step after execution ARGO_DEBUG_PAUSE_BEFORE - to pause a step before execution Example workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : pause-after- spec : entrypoint : whalesay templates : - name : whalesay container : image : argoproj/argosay:v2 env : - name : ARGO_DEBUG_PAUSE_AFTER value : 'true' In order to release a step from a pause state, marker files are used named /var/run/argo/ctr/main/after or /var/run/argo/ctr/main/before corresponding to when the step is paused. Pausing steps can be used together with ephemeral containers when a shell is not available in the used container. Example \u00b6 1) Create a workflow where the debug pause env in set, in this example ARGO_DEBUG_PAUSE_AFTER will be set and thus the step will be paused after execution of the user code. pause-after.yaml apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : pause-after- spec : entrypoint : whalesay templates : - name : whalesay container : image : argoproj/argosay:v2 env : - name : ARGO_DEBUG_PAUSE_AFTER value : 'true' argo submit -n argo --watch pause-after.yaml Create a shell in the container of interest of create a ephemeral container in the pod, in this example ephemeral containers are used. kubectl debug -n argo -it POD_NAME --image = busybox --target = main --share-processes In order to have access to the persistence volume used by the workflow step, --share-processes will have to be used. The ephemeral container can be used to perform debugging operations. When debugging has been completed, create the marker file to allow the workflow step to continue. When using process name space sharing container file systems are visible to other containers in the pod through the /proc/$pid/root link. touch /proc/1/root/run/argo/ctr/main/after","title":"Debug Pause"},{"location":"debug-pause/#debug-pause","text":"v3.3 and after","title":"Debug Pause"},{"location":"debug-pause/#introduction","text":"The debug pause feature makes it possible to pause individual workflow steps for debugging before, after or both and then release the steps from the paused state. 
Currently this feature is only supported when using the Emissary Executor In order to pause a container env variables are used: ARGO_DEBUG_PAUSE_AFTER - to pause a step after execution ARGO_DEBUG_PAUSE_BEFORE - to pause a step before execution Example workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : pause-after- spec : entrypoint : whalesay templates : - name : whalesay container : image : argoproj/argosay:v2 env : - name : ARGO_DEBUG_PAUSE_AFTER value : 'true' In order to release a step from a pause state, marker files are used named /var/run/argo/ctr/main/after or /var/run/argo/ctr/main/before corresponding to when the step is paused. Pausing steps can be used together with ephemeral containers when a shell is not available in the used container.","title":"Introduction"},{"location":"debug-pause/#example","text":"1) Create a workflow where the debug pause env in set, in this example ARGO_DEBUG_PAUSE_AFTER will be set and thus the step will be paused after execution of the user code. pause-after.yaml apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : pause-after- spec : entrypoint : whalesay templates : - name : whalesay container : image : argoproj/argosay:v2 env : - name : ARGO_DEBUG_PAUSE_AFTER value : 'true' argo submit -n argo --watch pause-after.yaml Create a shell in the container of interest of create a ephemeral container in the pod, in this example ephemeral containers are used. kubectl debug -n argo -it POD_NAME --image = busybox --target = main --share-processes In order to have access to the persistence volume used by the workflow step, --share-processes will have to be used. The ephemeral container can be used to perform debugging operations. When debugging has been completed, create the marker file to allow the workflow step to continue. When using process name space sharing container file systems are visible to other containers in the pod through the /proc/$pid/root link. touch /proc/1/root/run/argo/ctr/main/after","title":"Example"},{"location":"default-workflow-specs/","text":"Default Workflow Spec \u00b6 v2.7 and after Introduction \u00b6 Default Workflow spec values can be set at the controller config map that will apply to all Workflows executed from said controller. If a Workflow has a value that also has a default value set in the config map, the Workflow's value will take precedence. Setting Default Workflow Values \u00b6 Default Workflow values can be specified by adding them under the workflowDefaults key in the workflow-controller-configmap . Values can be added as they would under the Workflow.spec tag. 
For example, to specify default values that would partially produce the following Workflow : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : gc-ttl- annotations : argo : workflows labels : foo : bar spec : ttlStrategy : secondsAfterSuccess : 5 # Time to live after workflow is successful parallelism : 3 The following would be specified in the Config Map: # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : # Default values that will apply to all Workflows from this controller, unless overridden on the Workflow-level workflowDefaults : | metadata: annotations: argo: workflows labels: foo: bar spec: ttlStrategy: secondsAfterSuccess: 5 parallelism: 3","title":"Default Workflow Spec"},{"location":"default-workflow-specs/#default-workflow-spec","text":"v2.7 and after","title":"Default Workflow Spec"},{"location":"default-workflow-specs/#introduction","text":"Default Workflow spec values can be set at the controller config map that will apply to all Workflows executed from said controller. If a Workflow has a value that also has a default value set in the config map, the Workflow's value will take precedence.","title":"Introduction"},{"location":"default-workflow-specs/#setting-default-workflow-values","text":"Default Workflow values can be specified by adding them under the workflowDefaults key in the workflow-controller-configmap . Values can be added as they would under the Workflow.spec tag. For example, to specify default values that would partially produce the following Workflow : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : gc-ttl- annotations : argo : workflows labels : foo : bar spec : ttlStrategy : secondsAfterSuccess : 5 # Time to live after workflow is successful parallelism : 3 The following would be specified in the Config Map: # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : # Default values that will apply to all Workflows from this controller, unless overridden on the Workflow-level workflowDefaults : | metadata: annotations: argo: workflows labels: foo: bar spec: ttlStrategy: secondsAfterSuccess: 5 parallelism: 3","title":"Setting Default Workflow Values"},{"location":"disaster-recovery/","text":"Disaster Recovery (DR) \u00b6 We only store data in your Kubernetes cluster. You should consider backing this up regularly. Exporting example: kubectl get wf,cwf,cwft,wftmpl -A -o yaml > backup.yaml Importing example: kubectl apply -f backup.yaml You should also back-up any SQL persistence you use regularly with whatever tool is provided with it.","title":"Disaster Recovery (DR)"},{"location":"disaster-recovery/#disaster-recovery-dr","text":"We only store data in your Kubernetes cluster. You should consider backing this up regularly. Exporting example: kubectl get wf,cwf,cwft,wftmpl -A -o yaml > backup.yaml Importing example: kubectl apply -f backup.yaml You should also back-up any SQL persistence you use regularly with whatever tool is provided with it.","title":"Disaster Recovery (DR)"},{"location":"doc-changes/","text":"Documentation Changes \u00b6 Docs help our customers understand how to use workflows and fix their own problems. Doc changes are checked for spelling, broken links, and lint issues by CI. To check locally, run make docs . 
General guidelines: Explain when you would want to use a feature. Provide working examples. Format code using back-ticks to avoid it being reported as a spelling error. Prefer 1 sentence per line of markdown Follow the recommendations in the official Kubernetes Documentation Style Guide . Particularly useful sections include Content best practices and Patterns to avoid . Note : Argo does not use the same tooling, so the sections on \"shortcodes\" and \"EditorConfig\" are not relevant. Running Locally \u00b6 To test/run locally: make docs-serve Tips \u00b6 Use a service like Grammarly to check your grammar. Having your computer read text out loud is a way to catch problems, e.g.: Word substitutions (i.e. the wrong word is used, but spelled. correctly). Sentences that do not read correctly will sound wrong. On Mac, to set-up: Go to System Preferences / Accessibility / Spoken Content . Choose a System Voice (I like Siri Voice 1 ). Enable Speak selection . To hear text, select the text you want to hear, then press option+escape.","title":"Documentation Changes"},{"location":"doc-changes/#documentation-changes","text":"Docs help our customers understand how to use workflows and fix their own problems. Doc changes are checked for spelling, broken links, and lint issues by CI. To check locally, run make docs . General guidelines: Explain when you would want to use a feature. Provide working examples. Format code using back-ticks to avoid it being reported as a spelling error. Prefer 1 sentence per line of markdown Follow the recommendations in the official Kubernetes Documentation Style Guide . Particularly useful sections include Content best practices and Patterns to avoid . Note : Argo does not use the same tooling, so the sections on \"shortcodes\" and \"EditorConfig\" are not relevant.","title":"Documentation Changes"},{"location":"doc-changes/#running-locally","text":"To test/run locally: make docs-serve","title":"Running Locally"},{"location":"doc-changes/#tips","text":"Use a service like Grammarly to check your grammar. Having your computer read text out loud is a way to catch problems, e.g.: Word substitutions (i.e. the wrong word is used, but spelled. correctly). Sentences that do not read correctly will sound wrong. On Mac, to set-up: Go to System Preferences / Accessibility / Spoken Content . Choose a System Voice (I like Siri Voice 1 ). Enable Speak selection . To hear text, select the text you want to hear, then press option+escape.","title":"Tips"},{"location":"empty-dir/","text":"Empty Dir \u00b6 While by default, the Docker and PNS workflow executors can get output artifacts/parameters from the base layer (e.g. /tmp ), neither the Kubelet nor the K8SAPI executors can. It is unlikely you can get output artifacts/parameters from the base layer if you run your workflow pods with a security context . You can work-around this constraint by mounting volumes onto your pod. The easiest way to do this is to use as emptyDir volume. Note This is only needed for output artifacts/parameters. 
Input artifacts/parameters are automatically mounted to an empty-dir if needed This example shows how to mount an output volume: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : empty-dir- spec : entrypoint : main templates : - name : main container : image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"cowsay hello world | tee /mnt/out/hello_world.txt\" ] volumeMounts : - name : out mountPath : /mnt/out volumes : - name : out emptyDir : { } outputs : parameters : - name : message valueFrom : path : /mnt/out/hello_world.txt","title":"Empty Dir"},{"location":"empty-dir/#empty-dir","text":"While by default, the Docker and PNS workflow executors can get output artifacts/parameters from the base layer (e.g. /tmp ), neither the Kubelet nor the K8SAPI executors can. It is unlikely you can get output artifacts/parameters from the base layer if you run your workflow pods with a security context . You can work-around this constraint by mounting volumes onto your pod. The easiest way to do this is to use as emptyDir volume. Note This is only needed for output artifacts/parameters. Input artifacts/parameters are automatically mounted to an empty-dir if needed This example shows how to mount an output volume: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : empty-dir- spec : entrypoint : main templates : - name : main container : image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"cowsay hello world | tee /mnt/out/hello_world.txt\" ] volumeMounts : - name : out mountPath : /mnt/out volumes : - name : out emptyDir : { } outputs : parameters : - name : message valueFrom : path : /mnt/out/hello_world.txt","title":"Empty Dir"},{"location":"enhanced-depends-logic/","text":"Enhanced Depends Logic \u00b6 v2.9 and after Introduction \u00b6 Previous to version 2.8, the only way to specify dependencies in DAG templates was to use the dependencies field and specify a list of other tasks the current task depends on. This syntax was limiting because it does not allow the user to specify which result of the task to depend on. For example, a task may only be relevant to run if the dependent task succeeded (or failed, etc.). Depends \u00b6 To remedy this, there exists a new field called depends , which allows users to specify dependent tasks, their statuses, as well as any complex boolean logic. The field is a string field and the syntax is expression-like with operands having form . . Examples include task-1.Succeeded , task-2.Failed , task-3.Daemoned . The full list of available task results is as follows: Task Result Description Meaning .Succeeded Task Succeeded Task finished with no error .Failed Task Failed Task exited with a non-0 exit code .Errored Task Errored Task had an error other than a non-0 exit code .Skipped Task Skipped Task was skipped .Omitted Task Omitted Task was omitted .Daemoned Task is Daemoned and is not Pending For convenience, if an omitted task result is equivalent to (task.Succeeded || task.Skipped || task.Daemoned) . For example: depends : \"task || task-2.Failed\" is equivalent to: depends : (task.Succeeded || task.Skipped || task.Daemoned) || task-2.Failed Full boolean logic is also available. Operators include: && || ! 
Example: depends : \"(task-2.Succeeded || task-2.Skipped) && !task-3.Failed\" In the case that you're depending on a task that uses withItems , you can depend on whether any of the item tasks are successful or all have failed using .AnySucceeded and .AllFailed , for example: depends : \"task-1.AnySucceeded || task-2.AllFailed\" Compatibility with dependencies and dag.task.continueOn \u00b6 This feature is fully compatible with dependencies and conversion is easy. To convert simply join your dependencies with && : dependencies : [ \"A\" , \"B\" , \"C\" ] is equivalent to: depends : \"A && B && C\" Because of the added control found in depends , the dag.task.continueOn is not available when using it. Furthermore, it is not possible to use both dependencies and depends in the same task group.","title":"Enhanced Depends Logic"},{"location":"enhanced-depends-logic/#enhanced-depends-logic","text":"v2.9 and after","title":"Enhanced Depends Logic"},{"location":"enhanced-depends-logic/#introduction","text":"Previous to version 2.8, the only way to specify dependencies in DAG templates was to use the dependencies field and specify a list of other tasks the current task depends on. This syntax was limiting because it does not allow the user to specify which result of the task to depend on. For example, a task may only be relevant to run if the dependent task succeeded (or failed, etc.).","title":"Introduction"},{"location":"enhanced-depends-logic/#depends","text":"To remedy this, there exists a new field called depends , which allows users to specify dependent tasks, their statuses, as well as any complex boolean logic. The field is a string field and the syntax is expression-like with operands having form . . Examples include task-1.Succeeded , task-2.Failed , task-3.Daemoned . The full list of available task results is as follows: Task Result Description Meaning .Succeeded Task Succeeded Task finished with no error .Failed Task Failed Task exited with a non-0 exit code .Errored Task Errored Task had an error other than a non-0 exit code .Skipped Task Skipped Task was skipped .Omitted Task Omitted Task was omitted .Daemoned Task is Daemoned and is not Pending For convenience, if an omitted task result is equivalent to (task.Succeeded || task.Skipped || task.Daemoned) . For example: depends : \"task || task-2.Failed\" is equivalent to: depends : (task.Succeeded || task.Skipped || task.Daemoned) || task-2.Failed Full boolean logic is also available. Operators include: && || ! Example: depends : \"(task-2.Succeeded || task-2.Skipped) && !task-3.Failed\" In the case that you're depending on a task that uses withItems , you can depend on whether any of the item tasks are successful or all have failed using .AnySucceeded and .AllFailed , for example: depends : \"task-1.AnySucceeded || task-2.AllFailed\"","title":"Depends"},{"location":"enhanced-depends-logic/#compatibility-with-dependencies-and-dagtaskcontinueon","text":"This feature is fully compatible with dependencies and conversion is easy. To convert simply join your dependencies with && : dependencies : [ \"A\" , \"B\" , \"C\" ] is equivalent to: depends : \"A && B && C\" Because of the added control found in depends , the dag.task.continueOn is not available when using it. 
Furthermore, it is not possible to use both dependencies and depends in the same task group.","title":"Compatibility with dependencies and dag.task.continueOn"},{"location":"environment-variables/","text":"Environment Variables \u00b6 This document outlines environment variables that can be used to customize behavior. Warning Environment variables are typically added to test out experimental features and should not be used by most users. Environment variables may be removed at any time. Controller \u00b6 Name Type Default Description ARGO_AGENT_TASK_WORKERS int 16 The number of task workers for the agent pod. ALL_POD_CHANGES_SIGNIFICANT bool false Whether to consider all pod changes as significant during pod reconciliation. ALWAYS_OFFLOAD_NODE_STATUS bool false Whether to always offload the node status. ARCHIVED_WORKFLOW_GC_PERIOD time.Duration 24h The periodicity for GC of archived workflows. ARGO_PPROF bool false Enable pprof endpoints ARGO_PROGRESS_PATCH_TICK_DURATION time.Duration 1m How often self reported progress is patched into the pod annotations which means how long it takes until the controller picks up the progress change. Set to 0 to disable self reporting progress. ARGO_PROGRESS_FILE_TICK_DURATION time.Duration 3s How often the progress file is read by the executor. Set to 0 to disable self reporting progress. ARGO_REMOVE_PVC_PROTECTION_FINALIZER bool true Remove the kubernetes.io/pvc-protection finalizer from persistent volume claims (PVC) after marking PVCs created for the workflow for deletion, so deleted is not blocked until the pods are deleted. #6629 ARGO_TRACE string `` Whether to enable tracing statements in Argo components. ARGO_AGENT_PATCH_RATE time.Duration DEFAULT_REQUEUE_TIME Rate that the Argo Agent will patch the workflow task-set. ARGO_AGENT_CPU_LIMIT resource.Quantity 100m CPU resource limit for the agent. ARGO_AGENT_MEMORY_LIMIT resource.Quantity 256m Memory resource limit for the agent. BUBBLE_ENTRY_TEMPLATE_ERR bool true Whether to bubble up template errors to workflow. CACHE_GC_PERIOD time.Duration 0s How often to perform memoization cache GC, which is disabled by default and can be enabled by providing a non-zero duration. CACHE_GC_AFTER_NOT_HIT_DURATION time.Duration 30s When a memoization cache has not been hit after this duration, it will be deleted. CRON_SYNC_PERIOD time.Duration 10s How often to sync cron workflows. DEFAULT_REQUEUE_TIME time.Duration 10s The re-queue time for the rate limiter of the workflow queue. DISABLE_MAX_RECURSION bool false Set to true to disable the recursion preventer, which will stop a workflow running which has called into a child template 100 times EXPRESSION_TEMPLATES bool true Escape hatch to disable expression templates. EVENT_AGGREGATION_WITH_ANNOTATIONS bool false Whether event annotations will be used when aggregating events. GZIP_IMPLEMENTATION string PGZip The implementation of compression/decompression. Currently only \" PGZip \" and \" GZip \" are supported. INFORMER_WRITE_BACK bool true Whether to write back to informer instead of catching up. HEALTHZ_AGE time.Duration 5m How old a un-reconciled workflow is to report unhealthy. INDEX_WORKFLOW_SEMAPHORE_KEYS bool true Whether or not to index semaphores. LEADER_ELECTION_IDENTITY string Controller's metadata.name The ID used for workflow controllers to elect a leader. LEADER_ELECTION_DISABLE bool false Whether leader election should be disabled. 
LEADER_ELECTION_LEASE_DURATION time.Duration 15s The duration that non-leader candidates will wait to force acquire leadership. LEADER_ELECTION_RENEW_DEADLINE time.Duration 10s The duration that the acting master will retry refreshing leadership before giving up. LEADER_ELECTION_RETRY_PERIOD time.Duration 5s The duration that the leader election clients should wait between tries of actions. MAX_OPERATION_TIME time.Duration 30s The maximum time a workflow operation is allowed to run for before re-queuing the workflow onto the work queue. OFFLOAD_NODE_STATUS_TTL time.Duration 5m The TTL to delete the offloaded node status. Currently only used for testing. OPERATION_DURATION_METRIC_BUCKET_COUNT int 6 The number of buckets to collect the metric for the operation duration. POD_NAMES string v2 Whether to have pod names contain the template name (v2) or be the node id (v1) - should be set the same for Argo Server. RECENTLY_STARTED_POD_DURATION time.Duration 10s The duration of a pod before the pod is considered to be recently started. RETRY_BACKOFF_DURATION time.Duration 10ms The retry back-off duration when retrying API calls. RETRY_BACKOFF_FACTOR float 2.0 The retry back-off factor when retrying API calls. RETRY_BACKOFF_STEPS int 5 The retry back-off steps when retrying API calls. RETRY_HOST_NAME_LABEL_KEY string kubernetes.io/hostname The label key for host name used when retrying templates. TRANSIENT_ERROR_PATTERN string \"\" The regular expression that represents additional patterns for transient errors. WF_DEL_PROPAGATION_POLICY string \"\" The deletion propagation policy for workflows. WORKFLOW_GC_PERIOD time.Duration 5m The periodicity for GC of workflows. SEMAPHORE_NOTIFY_DELAY time.Duration 1s Tuning Delay when notifying semaphore waiters about availability in the semaphore CLI parameters of the Controller can be specified as environment variables with the ARGO_ prefix. For example: workflow-controller --managed-namespace = argo Can be expressed as: ARGO_MANAGED_NAMESPACE = argo workflow-controller You can set environment variables for the Controller Deployment's container spec like the following: apiVersion : apps/v1 kind : Deployment metadata : name : workflow-controller spec : selector : matchLabels : app : workflow-controller template : metadata : labels : app : workflow-controller spec : containers : - env : - name : WORKFLOW_GC_PERIOD value : 30s Executor \u00b6 Name Type Default Description EXECUTOR_RETRY_BACKOFF_DURATION time.Duration 1s The retry back-off duration when the workflow executor performs retries. EXECUTOR_RETRY_BACKOFF_FACTOR float 1.6 The retry back-off factor when the workflow executor performs retries. EXECUTOR_RETRY_BACKOFF_JITTER float 0.5 The retry back-off jitter when the workflow executor performs retries. EXECUTOR_RETRY_BACKOFF_STEPS int 5 The retry back-off steps when the workflow executor performs retries. REMOVE_LOCAL_ART_PATH bool false Whether to remove local artifacts. RESOURCE_STATE_CHECK_INTERVAL time.Duration 5s The time interval between resource status checks against the specified success and failure conditions. WAIT_CONTAINER_STATUS_CHECK_INTERVAL time.Duration 5s The time interval for wait container to check whether the containers have completed. 
You can set environment variables for the Executor in your workflow-controller-configmap like the following: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : config : | executor: env: - name: RESOURCE_STATE_CHECK_INTERVAL value: 3s Argo Server \u00b6 Name Type Default Description DISABLE_VALUE_LIST_RETRIEVAL_KEY_PATTERN string \"\" Disable the retrieval of the list of label values for keys based on this regular expression. FIRST_TIME_USER_MODAL bool true Show this modal. FEEDBACK_MODAL bool true Show this modal. NEW_VERSION_MODAL bool true Show this modal. POD_NAMES string v2 Whether to have pod names contain the template name (v2) or be the node id (v1) - should be set the same for Controller GRPC_MESSAGE_SIZE string 104857600 Use different GRPC Max message size for Server (supporting huge workflows). CLI parameters of the Server can be specified as environment variables with the ARGO_ prefix. For example: argo server --managed-namespace = argo Can be expressed as: ARGO_MANAGED_NAMESPACE = argo argo server You can set environment variables for the Server Deployment's container spec like the following: apiVersion : apps/v1 kind : Deployment metadata : name : argo-server spec : selector : matchLabels : app : argo-server template : metadata : labels : app : argo-server spec : containers : - args : - server image : argoproj/argocli:latest name : argo-server env : - name : GRPC_MESSAGE_SIZE value : \"209715200\" ports : # ...","title":"Environment Variables"},{"location":"environment-variables/#environment-variables","text":"This document outlines environment variables that can be used to customize behavior. Warning Environment variables are typically added to test out experimental features and should not be used by most users. Environment variables may be removed at any time.","title":"Environment Variables"},{"location":"environment-variables/#controller","text":"Name Type Default Description ARGO_AGENT_TASK_WORKERS int 16 The number of task workers for the agent pod. ALL_POD_CHANGES_SIGNIFICANT bool false Whether to consider all pod changes as significant during pod reconciliation. ALWAYS_OFFLOAD_NODE_STATUS bool false Whether to always offload the node status. ARCHIVED_WORKFLOW_GC_PERIOD time.Duration 24h The periodicity for GC of archived workflows. ARGO_PPROF bool false Enable pprof endpoints ARGO_PROGRESS_PATCH_TICK_DURATION time.Duration 1m How often self reported progress is patched into the pod annotations which means how long it takes until the controller picks up the progress change. Set to 0 to disable self reporting progress. ARGO_PROGRESS_FILE_TICK_DURATION time.Duration 3s How often the progress file is read by the executor. Set to 0 to disable self reporting progress. ARGO_REMOVE_PVC_PROTECTION_FINALIZER bool true Remove the kubernetes.io/pvc-protection finalizer from persistent volume claims (PVC) after marking PVCs created for the workflow for deletion, so deleted is not blocked until the pods are deleted. #6629 ARGO_TRACE string `` Whether to enable tracing statements in Argo components. ARGO_AGENT_PATCH_RATE time.Duration DEFAULT_REQUEUE_TIME Rate that the Argo Agent will patch the workflow task-set. ARGO_AGENT_CPU_LIMIT resource.Quantity 100m CPU resource limit for the agent. ARGO_AGENT_MEMORY_LIMIT resource.Quantity 256m Memory resource limit for the agent. BUBBLE_ENTRY_TEMPLATE_ERR bool true Whether to bubble up template errors to workflow. 
CACHE_GC_PERIOD time.Duration 0s How often to perform memoization cache GC, which is disabled by default and can be enabled by providing a non-zero duration. CACHE_GC_AFTER_NOT_HIT_DURATION time.Duration 30s When a memoization cache has not been hit after this duration, it will be deleted. CRON_SYNC_PERIOD time.Duration 10s How often to sync cron workflows. DEFAULT_REQUEUE_TIME time.Duration 10s The re-queue time for the rate limiter of the workflow queue. DISABLE_MAX_RECURSION bool false Set to true to disable the recursion preventer, which will stop a workflow running which has called into a child template 100 times EXPRESSION_TEMPLATES bool true Escape hatch to disable expression templates. EVENT_AGGREGATION_WITH_ANNOTATIONS bool false Whether event annotations will be used when aggregating events. GZIP_IMPLEMENTATION string PGZip The implementation of compression/decompression. Currently only \" PGZip \" and \" GZip \" are supported. INFORMER_WRITE_BACK bool true Whether to write back to informer instead of catching up. HEALTHZ_AGE time.Duration 5m How old a un-reconciled workflow is to report unhealthy. INDEX_WORKFLOW_SEMAPHORE_KEYS bool true Whether or not to index semaphores. LEADER_ELECTION_IDENTITY string Controller's metadata.name The ID used for workflow controllers to elect a leader. LEADER_ELECTION_DISABLE bool false Whether leader election should be disabled. LEADER_ELECTION_LEASE_DURATION time.Duration 15s The duration that non-leader candidates will wait to force acquire leadership. LEADER_ELECTION_RENEW_DEADLINE time.Duration 10s The duration that the acting master will retry refreshing leadership before giving up. LEADER_ELECTION_RETRY_PERIOD time.Duration 5s The duration that the leader election clients should wait between tries of actions. MAX_OPERATION_TIME time.Duration 30s The maximum time a workflow operation is allowed to run for before re-queuing the workflow onto the work queue. OFFLOAD_NODE_STATUS_TTL time.Duration 5m The TTL to delete the offloaded node status. Currently only used for testing. OPERATION_DURATION_METRIC_BUCKET_COUNT int 6 The number of buckets to collect the metric for the operation duration. POD_NAMES string v2 Whether to have pod names contain the template name (v2) or be the node id (v1) - should be set the same for Argo Server. RECENTLY_STARTED_POD_DURATION time.Duration 10s The duration of a pod before the pod is considered to be recently started. RETRY_BACKOFF_DURATION time.Duration 10ms The retry back-off duration when retrying API calls. RETRY_BACKOFF_FACTOR float 2.0 The retry back-off factor when retrying API calls. RETRY_BACKOFF_STEPS int 5 The retry back-off steps when retrying API calls. RETRY_HOST_NAME_LABEL_KEY string kubernetes.io/hostname The label key for host name used when retrying templates. TRANSIENT_ERROR_PATTERN string \"\" The regular expression that represents additional patterns for transient errors. WF_DEL_PROPAGATION_POLICY string \"\" The deletion propagation policy for workflows. WORKFLOW_GC_PERIOD time.Duration 5m The periodicity for GC of workflows. SEMAPHORE_NOTIFY_DELAY time.Duration 1s Tuning Delay when notifying semaphore waiters about availability in the semaphore CLI parameters of the Controller can be specified as environment variables with the ARGO_ prefix. 
For example: workflow-controller --managed-namespace = argo Can be expressed as: ARGO_MANAGED_NAMESPACE = argo workflow-controller You can set environment variables for the Controller Deployment's container spec like the following: apiVersion : apps/v1 kind : Deployment metadata : name : workflow-controller spec : selector : matchLabels : app : workflow-controller template : metadata : labels : app : workflow-controller spec : containers : - env : - name : WORKFLOW_GC_PERIOD value : 30s","title":"Controller"},{"location":"environment-variables/#executor","text":"Name Type Default Description EXECUTOR_RETRY_BACKOFF_DURATION time.Duration 1s The retry back-off duration when the workflow executor performs retries. EXECUTOR_RETRY_BACKOFF_FACTOR float 1.6 The retry back-off factor when the workflow executor performs retries. EXECUTOR_RETRY_BACKOFF_JITTER float 0.5 The retry back-off jitter when the workflow executor performs retries. EXECUTOR_RETRY_BACKOFF_STEPS int 5 The retry back-off steps when the workflow executor performs retries. REMOVE_LOCAL_ART_PATH bool false Whether to remove local artifacts. RESOURCE_STATE_CHECK_INTERVAL time.Duration 5s The time interval between resource status checks against the specified success and failure conditions. WAIT_CONTAINER_STATUS_CHECK_INTERVAL time.Duration 5s The time interval for wait container to check whether the containers have completed. You can set environment variables for the Executor in your workflow-controller-configmap like the following: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : config : | executor: env: - name: RESOURCE_STATE_CHECK_INTERVAL value: 3s","title":"Executor"},{"location":"environment-variables/#argo-server","text":"Name Type Default Description DISABLE_VALUE_LIST_RETRIEVAL_KEY_PATTERN string \"\" Disable the retrieval of the list of label values for keys based on this regular expression. FIRST_TIME_USER_MODAL bool true Show this modal. FEEDBACK_MODAL bool true Show this modal. NEW_VERSION_MODAL bool true Show this modal. POD_NAMES string v2 Whether to have pod names contain the template name (v2) or be the node id (v1) - should be set the same for Controller GRPC_MESSAGE_SIZE string 104857600 Use different GRPC Max message size for Server (supporting huge workflows). CLI parameters of the Server can be specified as environment variables with the ARGO_ prefix. For example: argo server --managed-namespace = argo Can be expressed as: ARGO_MANAGED_NAMESPACE = argo argo server You can set environment variables for the Server Deployment's container spec like the following: apiVersion : apps/v1 kind : Deployment metadata : name : argo-server spec : selector : matchLabels : app : argo-server template : metadata : labels : app : argo-server spec : containers : - args : - server image : argoproj/argocli:latest name : argo-server env : - name : GRPC_MESSAGE_SIZE value : \"209715200\" ports : # ...","title":"Argo Server"},{"location":"estimated-duration/","text":"Estimated Duration \u00b6 v2.12 and after When you run a workflow, the controller will try to estimate its duration. This is based on the most recently successful workflow submitted from the same workflow template, cluster workflow template or cron workflow. To get this data, the controller queries the Kubernetes API first (as this is faster) and then workflow archive (if enabled). If you've used tools like Jenkins, you'll know that that estimates can be inaccurate: A pod spent a long amount of time pending scheduling. 
The workflow is non-deterministic, e.g. it uses when to execute different paths. The workflow can vary in scale, e.g. sometimes it uses withItems and so sometimes runs 100 nodes and sometimes 1,000. The pod runtimes are unpredictable. The workflow is parametrized, and different parameters affect its duration.","title":"Estimated Duration"},{"location":"estimated-duration/#estimated-duration","text":"v2.12 and after When you run a workflow, the controller will try to estimate its duration. This is based on the most recently successful workflow submitted from the same workflow template, cluster workflow template or cron workflow. To get this data, the controller queries the Kubernetes API first (as this is faster) and then the workflow archive (if enabled). If you've used tools like Jenkins, you'll know that estimates can be inaccurate: A pod spent a long time pending scheduling.
Note that the name of the meta-data header \"x-argo-e2e\" is lowercase in the selector to match. Incoming header names are converted to lowercase. apiVersion : argoproj.io/v1alpha1 kind : WorkflowEventBinding metadata : name : event-consumer spec : event : # metadata header name must be lowercase to match in selector selector : payload.message != \"\" && metadata[\"x-argo-e2e\"] == [\"true\"] && discriminator == \"my-discriminator\" submit : workflowTemplateRef : name : my-wf-tmple arguments : parameters : - name : message valueFrom : event : payload.message Please, notice that workflowTemplateRef refers to a template with the name my-wf-tmple , this template has to be created before the triggering of the event. After that you have to apply the above explained WorkflowEventBinding (in this example this is called event-template.yml ) to realize the binding between Workflow Template and event (you can use kubectl to do that): kubectl apply -f event-template.yml Finally you can trigger the creation of your first parametrized workflow template, by using the following call: Event: curl $ARGO_SERVER /api/v1/events/argo/my-discriminator \\ -H \"Authorization: $ARGO_TOKEN \" \\ -H \"X-Argo-E2E: true\" \\ -d '{\"message\": \"hello events\"}' Malformed Expressions If the expression is malformed, this is logged. It is not visible in logs or the UI. Customizing the Workflow Meta-Data \u00b6 You can customize the name of the submitted workflow as well as add annotations and labels. This is done by adding a metadata object to the submit object. Normally the name of the workflow created from an event is simply the name of the template with a time-stamp appended. This can be customized by setting the name in the metadata object. Annotations and labels are added in the same fashion. All the values for the name, annotations and labels are treated as expressions (see below for details). The metadata object is the same metadata type as on all Kubernetes resources and as such is parsed in the same manner. It is best to enclose the expression in single quotes to avoid any problems when submitting the event binding to Kubernetes. This is an example snippet of how to set the name, annotations and labels. This is based on the workflow binding from above, and the first event. submit : metadata : annotations : anAnnotation : 'event.payload.message' name : 'event.payload.message + \"-world\"' labels : someLabel : '\"literal string\"' This will result in the workflow being named \"hello-world\" instead of my-wf-tmple- . There will be an extra label with the key someLabel and a value of \"literal string\". There will also be an extra annotation with the key anAnnotation and a value of \"hello\" Be careful when setting the name. If the name expression evaluates to that of a currently existing workflow, the new workflow will fail to submit. The name, annotation and label expression must evaluate to a string and follow the normal Kubernetes naming requirements . Event Expression Syntax and the Event Expression Environment \u00b6 Event expressions are expressions that are evaluated over the event expression environment . Expression Syntax \u00b6 Because the endpoint accepts any JSON data, it is the user's responsibility to write a suitable expression to correctly filter the events they are interested in. Therefore, DO NOT assume the existence of any fields, and guard against them using a nil check. Learn more about expression syntax . Expression Environment \u00b6 The event environment contains: payload the event payload. 
metadata event meta-data, including HTTP headers. discriminator the discriminator from the URL. Payload \u00b6 This is the JSON payload of the event. Example: payload.repository.clone_url == \"http://gihub.com/argoproj/argo\" Meta-Data \u00b6 Meta-data is data about the event, this includes headers : Headers \u00b6 HTTP header names are lowercase and only include those that have x- as their prefix. Their values are lists, not single values. Wrong: metadata[\"X-Github-Event\"] == \"push\" Wrong: metadata[\"x-github-event\"] == \"push\" Wrong: metadata[\"X-Github-Event\"] == [\"push\"] Wrong: metadata[\"github-event\"] == [\"push\"] Wrong: metadata[\"authorization\"] == [\"push\"] Right: metadata[\"x-github-event\"] == [\"push\"] Example: metadata[\"x-argo\"] == [\"yes\"] Discriminator \u00b6 This is only for edge-cases where neither the payload, or meta-data provide enough information to discriminate. Typically, it should be empty and ignored. Example: discriminator == \"my-discriminator\" High-Availability \u00b6 Run Minimum 2 Replicas You MUST run a minimum of two Argo Server replicas if you do not want to lose events. If you are processing large numbers of events, you may need to scale up the Argo Server to handle them. By default, a single Argo Server can be processing 64 events before the endpoint will start returning 503 errors. Vertically you can: Increase the size of the event operation queue --event-operation-queue-size (good for temporary event bursts). Increase the number of workers --event-worker-count (good for sustained numbers of events). Horizontally you can: Run more Argo Servers (good for sustained numbers of events AND high-availability).","title":"Events"},{"location":"events/#events","text":"v2.11 and after","title":"Events"},{"location":"events/#overview","text":"To support external webhooks, we have this endpoint /api/v1/events/{namespace}/{discriminator} . Events sent to that can be any JSON data. These events can submit workflow templates or cluster workflow templates . You may also wish to read about webhooks .","title":"Overview"},{"location":"events/#authentication-and-security","text":"Clients wanting to send events to the endpoint need an access token . It is only possible to submit workflow templates your access token has access to: example role . Example (note the trailing slash): curl https://localhost:2746/api/v1/events/argo/ \\ -H \"Authorization: $ARGO_TOKEN \" \\ -d '{\"message\": \"hello\"}' With a discriminator : curl https://localhost:2746/api/v1/events/argo/my-discriminator \\ -H \"Authorization: $ARGO_TOKEN \" \\ -d '{\"message\": \"hello\"}' The event endpoint will always return in under 10 seconds because the event will be queued and processed asynchronously. This means you will not be notified synchronously of failure. It will return a failure (503) if the event processing queue is full. Processing Order Events may not always be processed in the order they are received.","title":"Authentication and Security"},{"location":"events/#workflow-template-triggered-by-the-event","text":"Before the binding between an event and a workflow template, you must create the workflow template that you want to trigger. The following one takes in input the \"message\" parameter specified into the API call body, passed through the WorkflowEventBinding parameters section, and finally resolved here as the message of the whalesay image. 
apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : my-wf-tmple namespace : argo spec : templates : - name : main inputs : parameters : - name : message value : \"{{workflow.parameters.message}}\" container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] entrypoint : main","title":"Workflow Template triggered by the event"},{"location":"events/#submitting-a-workflow-from-a-workflow-template","text":"A workflow template will be submitted (i.e. workflow created from it) and that can be created using parameters from the event itself. The following example will be triggered by an event with \"message\" in the payload. That message will be used as an argument for the created workflow. Note that the name of the meta-data header \"x-argo-e2e\" is lowercase in the selector to match. Incoming header names are converted to lowercase. apiVersion : argoproj.io/v1alpha1 kind : WorkflowEventBinding metadata : name : event-consumer spec : event : # metadata header name must be lowercase to match in selector selector : payload.message != \"\" && metadata[\"x-argo-e2e\"] == [\"true\"] && discriminator == \"my-discriminator\" submit : workflowTemplateRef : name : my-wf-tmple arguments : parameters : - name : message valueFrom : event : payload.message Please, notice that workflowTemplateRef refers to a template with the name my-wf-tmple , this template has to be created before the triggering of the event. After that you have to apply the above explained WorkflowEventBinding (in this example this is called event-template.yml ) to realize the binding between Workflow Template and event (you can use kubectl to do that): kubectl apply -f event-template.yml Finally you can trigger the creation of your first parametrized workflow template, by using the following call: Event: curl $ARGO_SERVER /api/v1/events/argo/my-discriminator \\ -H \"Authorization: $ARGO_TOKEN \" \\ -H \"X-Argo-E2E: true\" \\ -d '{\"message\": \"hello events\"}' Malformed Expressions If the expression is malformed, this is logged. It is not visible in logs or the UI.","title":"Submitting A Workflow From A Workflow Template"},{"location":"events/#customizing-the-workflow-meta-data","text":"You can customize the name of the submitted workflow as well as add annotations and labels. This is done by adding a metadata object to the submit object. Normally the name of the workflow created from an event is simply the name of the template with a time-stamp appended. This can be customized by setting the name in the metadata object. Annotations and labels are added in the same fashion. All the values for the name, annotations and labels are treated as expressions (see below for details). The metadata object is the same metadata type as on all Kubernetes resources and as such is parsed in the same manner. It is best to enclose the expression in single quotes to avoid any problems when submitting the event binding to Kubernetes. This is an example snippet of how to set the name, annotations and labels. This is based on the workflow binding from above, and the first event. submit : metadata : annotations : anAnnotation : 'event.payload.message' name : 'event.payload.message + \"-world\"' labels : someLabel : '\"literal string\"' This will result in the workflow being named \"hello-world\" instead of my-wf-tmple- . There will be an extra label with the key someLabel and a value of \"literal string\". 
There will also be an extra annotation with the key anAnnotation and a value of \"hello\" Be careful when setting the name. If the name expression evaluates to that of a currently existing workflow, the new workflow will fail to submit. The name, annotation and label expression must evaluate to a string and follow the normal Kubernetes naming requirements .","title":"Customizing the Workflow Meta-Data"},{"location":"events/#event-expression-syntax-and-the-event-expression-environment","text":"Event expressions are expressions that are evaluated over the event expression environment .","title":"Event Expression Syntax and the Event Expression Environment"},{"location":"events/#expression-syntax","text":"Because the endpoint accepts any JSON data, it is the user's responsibility to write a suitable expression to correctly filter the events they are interested in. Therefore, DO NOT assume the existence of any fields, and guard against them using a nil check. Learn more about expression syntax .","title":"Expression Syntax"},{"location":"events/#expression-environment","text":"The event environment contains: payload the event payload. metadata event meta-data, including HTTP headers. discriminator the discriminator from the URL.","title":"Expression Environment"},{"location":"events/#payload","text":"This is the JSON payload of the event. Example: payload.repository.clone_url == \"http://gihub.com/argoproj/argo\"","title":"Payload"},{"location":"events/#meta-data","text":"Meta-data is data about the event, this includes headers :","title":"Meta-Data"},{"location":"events/#headers","text":"HTTP header names are lowercase and only include those that have x- as their prefix. Their values are lists, not single values. Wrong: metadata[\"X-Github-Event\"] == \"push\" Wrong: metadata[\"x-github-event\"] == \"push\" Wrong: metadata[\"X-Github-Event\"] == [\"push\"] Wrong: metadata[\"github-event\"] == [\"push\"] Wrong: metadata[\"authorization\"] == [\"push\"] Right: metadata[\"x-github-event\"] == [\"push\"] Example: metadata[\"x-argo\"] == [\"yes\"]","title":"Headers"},{"location":"events/#discriminator","text":"This is only for edge-cases where neither the payload, or meta-data provide enough information to discriminate. Typically, it should be empty and ignored. Example: discriminator == \"my-discriminator\"","title":"Discriminator"},{"location":"events/#high-availability","text":"Run Minimum 2 Replicas You MUST run a minimum of two Argo Server replicas if you do not want to lose events. If you are processing large numbers of events, you may need to scale up the Argo Server to handle them. By default, a single Argo Server can be processing 64 events before the endpoint will start returning 503 errors. Vertically you can: Increase the size of the event operation queue --event-operation-queue-size (good for temporary event bursts). Increase the number of workers --event-worker-count (good for sustained numbers of events). Horizontally you can: Run more Argo Servers (good for sustained numbers of events AND high-availability).","title":"High-Availability"},{"location":"executor_plugins/","text":"Executor Plugins \u00b6 Since v3.3 Configuration \u00b6 Plugins are disabled by default. To enable them, start the controller with ARGO_EXECUTOR_PLUGINS=true , e.g. 
apiVersion : apps/v1 kind : Deployment metadata : name : workflow-controller spec : template : spec : containers : - name : workflow-controller env : - name : ARGO_EXECUTOR_PLUGINS value : \"true\" When using the Helm chart , add this to your values.yaml : controller : extraEnv : - name : ARGO_EXECUTOR_PLUGINS value : \"true\" Template Executor \u00b6 This is a plugin that runs custom \"plugin\" templates, e.g. for non-pod tasks such as Tekton builds, Spark jobs, sending Slack notifications. A Simple Python Plugin \u00b6 Let's make a Python plugin that prints \"hello\" each time the workflow is operated on. We need the following: Plugins enabled (see above). A HTTP server that will be run as a sidecar to the main container and will respond to RPC HTTP requests from the executor with this API contract . A plugin.yaml configuration file, that is turned into a config map so the controller can discover the plugin. A template executor plugin services HTTP POST requests on /api/v1/template.execute : curl http://localhost:4355/api/v1/template.execute -d \\ '{ \"workflow\": { \"metadata\": { \"name\": \"my-wf\" } }, \"template\": { \"name\": \"my-tmpl\", \"inputs\": {}, \"outputs\": {}, \"plugin\": { \"hello\": {} } } }' # ... HTTP/1.1 200 OK { \"node\" : { \"phase\" : \"Succeeded\" , \"message\" : \"Hello template!\" } } Tip: The port number can be anything, but must not conflict with other plugins. Don't use common ports such as 80, 443, 8080, 8081, 8443. If you plan to publish your plugin, choose a random port number under 10,000 and create a PR to add your plugin. If not, use a port number greater than 10,000. We'll need to create a script that starts a HTTP server. Save this as server.py : import json from http.server import BaseHTTPRequestHandler , HTTPServer with open ( \"/var/run/argo/token\" ) as f : token = f . read () . strip () class Plugin ( BaseHTTPRequestHandler ): def args ( self ): return json . loads ( self . rfile . read ( int ( self . headers . get ( 'Content-Length' )))) def reply ( self , reply ): self . send_response ( 200 ) self . end_headers () self . wfile . write ( json . dumps ( reply ) . encode ( \"UTF-8\" )) def forbidden ( self ): self . send_response ( 403 ) self . end_headers () def unsupported ( self ): self . send_response ( 404 ) self . end_headers () def do_POST ( self ): if self . headers . get ( \"Authorization\" ) != \"Bearer \" + token : self . forbidden () elif self . path == '/api/v1/template.execute' : args = self . args () if 'hello' in args [ 'template' ] . get ( 'plugin' , {}): self . reply ( { 'node' : { 'phase' : 'Succeeded' , 'message' : 'Hello template!' , 'outputs' : { 'parameters' : [{ 'name' : 'foo' , 'value' : 'bar' }]}}}) else : self . reply ({}) else : self . unsupported () if __name__ == '__main__' : httpd = HTTPServer (( '' , 4355 ), Plugin ) httpd . serve_forever () Tip : Plugins can be written in any language you can run as a container. Python is convenient because you can embed the script in the container. Some things to note here: You only need to implement the calls you need. Return 404 and it won't be called again. The path is the RPC method name. You should check that the Authorization header contains the same value as /var/run/argo/token . Return 403 if not The request body contains the template's input parameters. The response body may contain the node's result, including the phase (e.g. \"Succeeded\" or \"Failed\") and a message. If the response is {} , then the plugin is saying it cannot execute the plugin template, e.g. 
it is a Slack plugin, but the template is a Tekton job. If the status code is 404, then the plugin will not be called again. If you save the file as server.* , it will be copied to the sidecar container's args field. This is useful for building self-contained plugins in scripting languages like Python or Node.JS. Next, create a manifest named plugin.yaml : apiVersion : argoproj.io/v1alpha1 kind : ExecutorPlugin metadata : name : hello spec : sidecar : container : command : - python - -u # disables output buffering - -c image : python:alpine3.6 name : hello-executor-plugin ports : - containerPort : 4355 securityContext : runAsNonRoot : true runAsUser : 65534 # nobody resources : requests : memory : \"64Mi\" cpu : \"250m\" limits : memory : \"128Mi\" cpu : \"500m\" Build and install as follows: argo executor-plugin build . kubectl -n argo apply -f hello-executor-plugin-configmap.yaml Check your controller logs: level=info msg=\"Executor plugin added\" name=hello-controller-plugin Run this workflow. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello- spec : entrypoint : main templates : - name : main plugin : hello : { } You'll see the workflow complete successfully. Discovery \u00b6 When a workflow is run, plugins are loaded from: The workflow's namespace. The Argo installation namespace (typically argo ). If two plugins have the same name, only the one in the workflow's namespace is loaded. Secrets \u00b6 If you interact with a third-party system, you'll need access to secrets. Don't put them in plugin.yaml . Use a secret: spec : sidecar : container : env : - name : URL valueFrom : secretKeyRef : name : slack-executor-plugin key : URL Refer to the Kubernetes Secret documentation for secret best practices and security considerations. Resources, Security Context \u00b6 We made these mandatory, so no one can create a plugin that uses an unreasonable amount of memory, or run as root unless they deliberately do so: spec : sidecar : container : resources : requests : cpu : 100m memory : 32Mi limits : cpu : 200m memory : 64Mi securityContext : runAsNonRoot : true runAsUser : 1000 Failure \u00b6 A plugin may fail as follows: Connection/socket error - considered transient. Timeout - considered transient. 404 error - method is not supported by the plugin, as a result the method will not be called again (in the same workflow). 503 error - considered transient. Other 4xx/5xx errors - considered fatal. Transient errors are retried, all other errors are considered fatal. Fatal errors will result in failed steps. Re-Queue \u00b6 It might be the case that the plugin can't finish straight away. E.g. it starts a long running task. When that happens, you return \"Pending\" or \"Running\" a and a re-queue time: { \"node\" : { \"phase\" : \"Running\" , \"message\" : \"Long-running task started\" }, \"requeue\" : \"2m\" } In this example, the task will be re-queued and template.execute will be called again in 2 minutes. 
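As a rough sketch only, building on the server.py example above, a plugin handler for a long-running task might shape its template.execute responses like this (the started/finished checks and the start_long_running_task helper are illustrative assumptions, not part of the documented API):

```python
# Hypothetical response logic for /api/v1/template.execute in the sidecar plugin.
# First call: start the work and ask the executor to re-queue the task.
# Later calls: report progress or the final phase.
def execute_template(args, started, finished):
    if not started:
        start_long_running_task(args)  # assumed helper, not from the docs
        return {
            "node": {"phase": "Running", "message": "Long-running task started"},
            "requeue": "2m",
        }
    if finished:
        return {"node": {"phase": "Succeeded", "message": "Long-running task finished"}}
    # Still in progress: stay Running and ask to be called again.
    return {"node": {"phase": "Running", "message": "Still running"}, "requeue": "2m"}
```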
Debugging \u00b6 You can find the plugin's log in the agent pod's sidecar, e.g.: kubectl -n argo logs ${ agentPodName } -c hello-executor-plugin Listing Plugins \u00b6 Because plugins are just config maps, you can list them using kubectl : kubectl get cm -l workflows.argoproj.io/configmap-type = ExecutorPlugin Examples and Community Contributed Plugins \u00b6 Plugin directory Publishing Your Plugin \u00b6 If you want to publish and share you plugin (we hope you do!), then submit a pull request to add it to the above directory.","title":"Executor Plugins"},{"location":"executor_plugins/#executor-plugins","text":"Since v3.3","title":"Executor Plugins"},{"location":"executor_plugins/#configuration","text":"Plugins are disabled by default. To enable them, start the controller with ARGO_EXECUTOR_PLUGINS=true , e.g. apiVersion : apps/v1 kind : Deployment metadata : name : workflow-controller spec : template : spec : containers : - name : workflow-controller env : - name : ARGO_EXECUTOR_PLUGINS value : \"true\" When using the Helm chart , add this to your values.yaml : controller : extraEnv : - name : ARGO_EXECUTOR_PLUGINS value : \"true\"","title":"Configuration"},{"location":"executor_plugins/#template-executor","text":"This is a plugin that runs custom \"plugin\" templates, e.g. for non-pod tasks such as Tekton builds, Spark jobs, sending Slack notifications.","title":"Template Executor"},{"location":"executor_plugins/#a-simple-python-plugin","text":"Let's make a Python plugin that prints \"hello\" each time the workflow is operated on. We need the following: Plugins enabled (see above). A HTTP server that will be run as a sidecar to the main container and will respond to RPC HTTP requests from the executor with this API contract . A plugin.yaml configuration file, that is turned into a config map so the controller can discover the plugin. A template executor plugin services HTTP POST requests on /api/v1/template.execute : curl http://localhost:4355/api/v1/template.execute -d \\ '{ \"workflow\": { \"metadata\": { \"name\": \"my-wf\" } }, \"template\": { \"name\": \"my-tmpl\", \"inputs\": {}, \"outputs\": {}, \"plugin\": { \"hello\": {} } } }' # ... HTTP/1.1 200 OK { \"node\" : { \"phase\" : \"Succeeded\" , \"message\" : \"Hello template!\" } } Tip: The port number can be anything, but must not conflict with other plugins. Don't use common ports such as 80, 443, 8080, 8081, 8443. If you plan to publish your plugin, choose a random port number under 10,000 and create a PR to add your plugin. If not, use a port number greater than 10,000. We'll need to create a script that starts a HTTP server. Save this as server.py : import json from http.server import BaseHTTPRequestHandler , HTTPServer with open ( \"/var/run/argo/token\" ) as f : token = f . read () . strip () class Plugin ( BaseHTTPRequestHandler ): def args ( self ): return json . loads ( self . rfile . read ( int ( self . headers . get ( 'Content-Length' )))) def reply ( self , reply ): self . send_response ( 200 ) self . end_headers () self . wfile . write ( json . dumps ( reply ) . encode ( \"UTF-8\" )) def forbidden ( self ): self . send_response ( 403 ) self . end_headers () def unsupported ( self ): self . send_response ( 404 ) self . end_headers () def do_POST ( self ): if self . headers . get ( \"Authorization\" ) != \"Bearer \" + token : self . forbidden () elif self . path == '/api/v1/template.execute' : args = self . args () if 'hello' in args [ 'template' ] . get ( 'plugin' , {}): self . 
reply ( { 'node' : { 'phase' : 'Succeeded' , 'message' : 'Hello template!' , 'outputs' : { 'parameters' : [{ 'name' : 'foo' , 'value' : 'bar' }]}}}) else : self . reply ({}) else : self . unsupported () if __name__ == '__main__' : httpd = HTTPServer (( '' , 4355 ), Plugin ) httpd . serve_forever () Tip : Plugins can be written in any language you can run as a container. Python is convenient because you can embed the script in the container. Some things to note here: You only need to implement the calls you need. Return 404 and it won't be called again. The path is the RPC method name. You should check that the Authorization header contains the same value as /var/run/argo/token . Return 403 if not The request body contains the template's input parameters. The response body may contain the node's result, including the phase (e.g. \"Succeeded\" or \"Failed\") and a message. If the response is {} , then the plugin is saying it cannot execute the plugin template, e.g. it is a Slack plugin, but the template is a Tekton job. If the status code is 404, then the plugin will not be called again. If you save the file as server.* , it will be copied to the sidecar container's args field. This is useful for building self-contained plugins in scripting languages like Python or Node.JS. Next, create a manifest named plugin.yaml : apiVersion : argoproj.io/v1alpha1 kind : ExecutorPlugin metadata : name : hello spec : sidecar : container : command : - python - -u # disables output buffering - -c image : python:alpine3.6 name : hello-executor-plugin ports : - containerPort : 4355 securityContext : runAsNonRoot : true runAsUser : 65534 # nobody resources : requests : memory : \"64Mi\" cpu : \"250m\" limits : memory : \"128Mi\" cpu : \"500m\" Build and install as follows: argo executor-plugin build . kubectl -n argo apply -f hello-executor-plugin-configmap.yaml Check your controller logs: level=info msg=\"Executor plugin added\" name=hello-controller-plugin Run this workflow. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello- spec : entrypoint : main templates : - name : main plugin : hello : { } You'll see the workflow complete successfully.","title":"A Simple Python Plugin"},{"location":"executor_plugins/#discovery","text":"When a workflow is run, plugins are loaded from: The workflow's namespace. The Argo installation namespace (typically argo ). If two plugins have the same name, only the one in the workflow's namespace is loaded.","title":"Discovery"},{"location":"executor_plugins/#secrets","text":"If you interact with a third-party system, you'll need access to secrets. Don't put them in plugin.yaml . Use a secret: spec : sidecar : container : env : - name : URL valueFrom : secretKeyRef : name : slack-executor-plugin key : URL Refer to the Kubernetes Secret documentation for secret best practices and security considerations.","title":"Secrets"},{"location":"executor_plugins/#resources-security-context","text":"We made these mandatory, so no one can create a plugin that uses an unreasonable amount of memory, or run as root unless they deliberately do so: spec : sidecar : container : resources : requests : cpu : 100m memory : 32Mi limits : cpu : 200m memory : 64Mi securityContext : runAsNonRoot : true runAsUser : 1000","title":"Resources, Security Context"},{"location":"executor_plugins/#failure","text":"A plugin may fail as follows: Connection/socket error - considered transient. Timeout - considered transient. 
404 error - method is not supported by the plugin, as a result the method will not be called again (in the same workflow). 503 error - considered transient. Other 4xx/5xx errors - considered fatal. Transient errors are retried, all other errors are considered fatal. Fatal errors will result in failed steps.","title":"Failure"},{"location":"executor_plugins/#re-queue","text":"It might be the case that the plugin can't finish straight away. E.g. it starts a long running task. When that happens, you return \"Pending\" or \"Running\" a and a re-queue time: { \"node\" : { \"phase\" : \"Running\" , \"message\" : \"Long-running task started\" }, \"requeue\" : \"2m\" } In this example, the task will be re-queued and template.execute will be called again in 2 minutes.","title":"Re-Queue"},{"location":"executor_plugins/#debugging","text":"You can find the plugin's log in the agent pod's sidecar, e.g.: kubectl -n argo logs ${ agentPodName } -c hello-executor-plugin","title":"Debugging"},{"location":"executor_plugins/#listing-plugins","text":"Because plugins are just config maps, you can list them using kubectl : kubectl get cm -l workflows.argoproj.io/configmap-type = ExecutorPlugin","title":"Listing Plugins"},{"location":"executor_plugins/#examples-and-community-contributed-plugins","text":"Plugin directory","title":"Examples and Community Contributed Plugins"},{"location":"executor_plugins/#publishing-your-plugin","text":"If you want to publish and share you plugin (we hope you do!), then submit a pull request to add it to the above directory.","title":"Publishing Your Plugin"},{"location":"executor_swagger/","text":"The API for an executor plugin. \u00b6 Informations \u00b6 Version \u00b6 0.0.1 Content negotiation \u00b6 URI Schemes \u00b6 http Consumes \u00b6 application/json Produces \u00b6 application/json All endpoints \u00b6 operations \u00b6 Method URI Name Summary POST /api/v1/template.execute execute template Paths \u00b6 execute template ( executeTemplate ) \u00b6 POST /api/v1/template.execute Parameters \u00b6 Name Source Type Go type Separator Required Default Description Body body ExecuteTemplateArgs models.ExecuteTemplateArgs \u2713 All responses \u00b6 Code Status Description Has headers Schema 200 OK schema Responses \u00b6 200 \u00b6 Status: OK Schema \u00b6 ExecuteTemplateReply Models \u00b6 AWSElasticBlockStoreVolumeSource \u00b6 An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine +optional partition int32 (formatted integer) int32 partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). +optional readOnly boolean bool readOnly value true will force the readOnly setting in VolumeMounts. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore +optional volumeID string string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Affinity \u00b6 Properties Name Type Go type Required Default Description Example nodeAffinity NodeAffinity NodeAffinity podAffinity PodAffinity PodAffinity podAntiAffinity PodAntiAffinity PodAntiAffinity Amount \u00b6 +kubebuilder:validation:Type=number interface{} AnyString \u00b6 It will unmarshall int64, int32, float64, float32, boolean, a plain string and represents it as string. It will marshall back to string - marshalling is not symmetric. Name Type Go type Default Description Example AnyString string string It will unmarshall int64, int32, float64, float32, boolean, a plain string and represents it as string. It will marshall back to string - marshalling is not symmetric. ArchiveStrategy \u00b6 ArchiveStrategy describes how to archive files/directory when saving artifacts Properties Name Type Go type Required Default Description Example none NoneStrategy NoneStrategy tar TarStrategy TarStrategy zip ZipStrategy ZipStrategy Arguments \u00b6 Arguments to a template Properties Name Type Go type Required Default Description Example artifacts Artifacts Artifacts parameters [] Parameter []*Parameter Parameters is the list of parameters to pass to the template or workflow +patchStrategy=merge +patchMergeKey=name Artifact \u00b6 Artifact indicates an artifact to place at a specified path Properties Name Type Go type Required Default Description Example archive ArchiveStrategy ArchiveStrategy archiveLogs boolean bool ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC artifactory ArtifactoryArtifact ArtifactoryArtifact azure AzureArtifact AzureArtifact deleted boolean bool Has this been deleted? from string string From allows an artifact to reference an artifact from a previous step fromExpression string string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCSArtifact git GitArtifact GitArtifact globalName string string GlobalName exports an output artifact to the global scope, making it available as '{{workflow.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFSArtifact http HTTPArtifact HTTPArtifact mode int32 (formatted integer) int32 mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string string name of the artifact. must be unique within a template's inputs/outputs. 
optional boolean bool Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSSArtifact path string string Path is the container path to the artifact raw RawArtifact RawArtifact recurseMode boolean bool If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3Artifact subPath string string SubPath allows an artifact to be sourced from a subpath within the specified source ArtifactGC \u00b6 ArtifactGC describes how to delete artifacts from completed Workflows - this is embedded into the WorkflowLevelArtifactGC, and also used for individual Artifacts to override that as needed Properties Name Type Go type Required Default Description Example podMetadata Metadata Metadata serviceAccountName string string ServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion strategy ArtifactGCStrategy ArtifactGCStrategy ArtifactGCStrategy \u00b6 Name Type Go type Default Description Example ArtifactGCStrategy string string ArtifactLocation \u00b6 It is used as single artifact in the context of inputs/outputs (e.g. outputs.artifacts.artname). It is also used to describe the location of multiple artifacts such as the archive location of a single workflow step, which the executor will use as a default location to store its files. Properties Name Type Go type Required Default Description Example archiveLogs boolean bool ArchiveLogs indicates if the container logs should be archived artifactory ArtifactoryArtifact ArtifactoryArtifact azure AzureArtifact AzureArtifact gcs GCSArtifact GCSArtifact git GitArtifact GitArtifact hdfs HDFSArtifact HDFSArtifact http HTTPArtifact HTTPArtifact oss OSSArtifact OSSArtifact raw RawArtifact RawArtifact s3 S3Artifact S3Artifact ArtifactPaths \u00b6 ArtifactPaths expands a step from a collection of artifacts Properties Name Type Go type Required Default Description Example archive ArchiveStrategy ArchiveStrategy archiveLogs boolean bool ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC artifactory ArtifactoryArtifact ArtifactoryArtifact azure AzureArtifact AzureArtifact deleted boolean bool Has this been deleted? from string string From allows an artifact to reference an artifact from a previous step fromExpression string string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCSArtifact git GitArtifact GitArtifact globalName string string GlobalName exports an output artifact to the global scope, making it available as '{{workflow.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFSArtifact http HTTPArtifact HTTPArtifact mode int32 (formatted integer) int32 mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string string name of the artifact. must be unique within a template's inputs/outputs. 
optional boolean bool Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSSArtifact path string string Path is the container path to the artifact raw RawArtifact RawArtifact recurseMode boolean bool If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3Artifact subPath string string SubPath allows an artifact to be sourced from a subpath within the specified source ArtifactoryArtifact \u00b6 ArtifactoryArtifact is the location of an artifactory artifact Properties Name Type Go type Required Default Description Example passwordSecret SecretKeySelector SecretKeySelector url string string URL of the artifact usernameSecret SecretKeySelector SecretKeySelector Artifacts \u00b6 [] Artifact AzureArtifact \u00b6 AzureArtifact is the location of a an Azure Storage artifact Properties Name Type Go type Required Default Description Example accountKeySecret SecretKeySelector SecretKeySelector blob string string Blob is the blob name (i.e., path) in the container where the artifact resides container string string Container is the container where resources will be stored endpoint string string Endpoint is the service url associated with an account. It is most likely \"https:// .blob.core.windows.net\" useSDKCreds boolean bool UseSDKCreds tells the driver to figure out credentials based on sdk defaults. AzureDataDiskCachingMode \u00b6 +enum Name Type Go type Default Description Example AzureDataDiskCachingMode string string +enum AzureDataDiskKind \u00b6 +enum Name Type Go type Default Description Example AzureDataDiskKind string string +enum AzureDiskVolumeSource \u00b6 Properties Name Type Go type Required Default Description Example cachingMode AzureDataDiskCachingMode AzureDataDiskCachingMode diskName string string diskName is the Name of the data disk in the blob storage diskURI string string diskURI is the URI of data disk in the blob storage fsType string string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. +optional kind AzureDataDiskKind AzureDataDiskKind readOnly boolean bool readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional AzureFileVolumeSource \u00b6 Properties Name Type Go type Required Default Description Example readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional secretName string string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string string shareName is the azure share Name Backoff \u00b6 Backoff is a backoff strategy to use within retryStrategy Properties Name Type Go type Required Default Description Example duration string string Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. 
\"2m\", \"1h\") factor IntOrString IntOrString maxDuration string string MaxDuration is the maximum amount of time allowed for a workflow in the backoff strategy BasicAuth \u00b6 BasicAuth describes the secret selectors required for basic authentication Properties Name Type Go type Required Default Description Example passwordSecret SecretKeySelector SecretKeySelector usernameSecret SecretKeySelector SecretKeySelector CSIVolumeSource \u00b6 Represents a source location of a volume to mount, managed by an external CSI driver Properties Name Type Go type Required Default Description Example driver string string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string string fsType to mount. Ex. \"ext4\", \"xfs\", \"ntfs\". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. +optional nodePublishSecretRef LocalObjectReference LocalObjectReference readOnly boolean bool readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). +optional volumeAttributes map of string map[string]string volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. +optional Cache \u00b6 Cache is the configuration for the type of cache to be used Properties Name Type Go type Required Default Description Example configMap ConfigMapKeySelector ConfigMapKeySelector Capabilities \u00b6 Properties Name Type Go type Required Default Description Example add [] Capability []Capability Added capabilities +optional drop [] Capability []Capability Removed capabilities +optional Capability \u00b6 Capability represent POSIX capabilities type Name Type Go type Default Description Example Capability string string Capability represent POSIX capabilities type CephFSVolumeSource \u00b6 Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example monitors []string []string monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / +optional readOnly boolean bool readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it +optional secretFile string string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it +optional secretRef LocalObjectReference LocalObjectReference user string string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it +optional CinderVolumeSource \u00b6 A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". 
Implicitly inferred to be \"ext4\" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md +optional readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md +optional secretRef LocalObjectReference LocalObjectReference volumeID string string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md ClientCertAuth \u00b6 ClientCertAuth holds necessary information for client authentication via certificates Properties Name Type Go type Required Default Description Example clientCertSecret SecretKeySelector SecretKeySelector clientKeySecret SecretKeySelector SecretKeySelector ConfigMapEnvSource \u00b6 The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Properties Name Type Go type Required Default Description Example name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool Specify whether the ConfigMap must be defined +optional ConfigMapKeySelector \u00b6 +structType=atomic Properties Name Type Go type Required Default Description Example key string string The key to select. name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool Specify whether the ConfigMap or its key must be defined +optional ConfigMapProjection \u00b6 The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. Properties Name Type Go type Required Default Description Example items [] KeyToPath []*KeyToPath items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool optional specify whether the ConfigMap or its keys must be defined +optional ConfigMapVolumeSource \u00b6 The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. 
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional items [] KeyToPath []*KeyToPath items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool optional specify whether the ConfigMap or its keys must be defined +optional Container \u00b6 Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. 
+optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional ContainerNode \u00b6 Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". 
Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional dependencies []string []string env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. 
If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional ContainerPort \u00b6 Properties Name Type Go type Required Default Description Example containerPort int32 (formatted integer) int32 Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string string What host IP to bind the external port to. +optional hostPort int32 (formatted integer) int32 Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. +optional name string string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. +optional protocol Protocol Protocol ContainerSetRetryStrategy \u00b6 Properties Name Type Go type Required Default Description Example duration string string Duration is the time between each retry, examples values are \"300ms\", \"1s\" or \"5m\". Valid time units are \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\". retries IntOrString IntOrString ContainerSetTemplate \u00b6 Properties Name Type Go type Required Default Description Example containers [] ContainerNode []*ContainerNode retryStrategy ContainerSetRetryStrategy ContainerSetRetryStrategy volumeMounts [] VolumeMount []*VolumeMount ContinueOn \u00b6 It can be specified if the workflow should continue when the pod errors, fails or both. 
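For illustration only, here is a minimal sketch of how these continueOn flags might be set on a workflow step; the step and template names are hypothetical and not part of this reference:

```yaml
# Hypothetical step that the workflow should continue past even on failure.
- - name: flaky-step
    template: flaky-task      # assumed template defined elsewhere in the Workflow
    continueOn:
      failed: true            # keep going if the pod fails
      error: true             # keep going if the pod errors
```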
Properties Name Type Go type Required Default Description Example error boolean bool +optional failed boolean bool +optional Counter \u00b6 Counter is a Counter prometheus metric Properties Name Type Go type Required Default Description Example value string string Value is the value of the metric CreateS3BucketOptions \u00b6 CreateS3BucketOptions options used to determine the automatic bucket-creation process Properties Name Type Go type Required Default Description Example objectLocking boolean bool ObjectLocking Enable object locking DAGTask \u00b6 DAGTask represents a node in the graph during DAG execution Properties Name Type Go type Required Default Description Example arguments Arguments Arguments continueOn ContinueOn ContinueOn dependencies []string []string Dependencies are names of other targets which this depends on depends string string Depends are names of other targets which this depends on hooks LifecycleHooks LifecycleHooks inline Template Template name string string Name is the name of the target onExit string string OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template. DEPRECATED: Use Hooks[exit].Template instead. template string string Name of template to execute templateRef TemplateRef TemplateRef when string string When is an expression in which the task should conditionally execute withItems [] Item []Item WithItems expands a task into multiple parallel tasks from the items in the list withParam string string WithParam expands a task into multiple parallel tasks from the value in the parameter, which is expected to be a JSON list. withSequence Sequence Sequence DAGTemplate \u00b6 DAGTemplate is a template subtype for directed acyclic graph templates Properties Name Type Go type Required Default Description Example failFast boolean bool This flag is for DAG logic. The DAG logic has a built-in \"fail fast\" feature to stop scheduling new steps, as soon as it detects that one of the DAG nodes is failed. Then it waits until all DAG nodes are completed before failing the DAG itself. The FailFast flag default is true; if set to false, it will allow a DAG to run all branches of the DAG to completion (either success or failure), regardless of the failed outcomes of branches in the DAG. More info and example about this feature at https://github.com/argoproj/argo-workflows/issues/1442 target string string Target is one or more names of targets to execute in a DAG tasks [] DAGTask []*DAGTask Tasks are a list of DAG tasks +patchStrategy=merge +patchMergeKey=name Data \u00b6 Data is a data template Properties Name Type Go type Required Default Description Example source DataSource DataSource transformation Transformation Transformation DataSource \u00b6 DataSource sources external data into a data template Properties Name Type Go type Required Default Description Example artifactPaths ArtifactPaths ArtifactPaths DownwardAPIProjection \u00b6 Note that this is identical to a downwardAPI volume source without the default mode.
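As a brief, non-authoritative sketch of how the DAGTemplate and DAGTask fields described above fit together (task and template names here are hypothetical):

```yaml
# Hypothetical diamond-shaped DAG: B and C run after A, D runs after both.
- name: diamond
  dag:
    failFast: true              # default behaviour; shown here for clarity
    tasks:
      - name: A
        template: echo          # assumed template defined elsewhere
      - name: B
        depends: A
        template: echo
      - name: C
        depends: A
        template: echo
      - name: D
        depends: B && C         # enhanced depends expression
        template: echo
```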
Properties Name Type Go type Required Default Description Example items [] DownwardAPIVolumeFile []*DownwardAPIVolumeFile Items is a list of DownwardAPIVolume file +optional DownwardAPIVolumeFile \u00b6 DownwardAPIVolumeFile represents information to create the file containing the pod field Properties Name Type Go type Required Default Description Example fieldRef ObjectFieldSelector ObjectFieldSelector mode int32 (formatted integer) int32 Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional path string string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef ResourceFieldSelector ResourceFieldSelector DownwardAPIVolumeSource \u00b6 Downward API volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 Optional: mode bits to use on created files by default. Must be a Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional items [] DownwardAPIVolumeFile []*DownwardAPIVolumeFile Items is a list of downward API volume file +optional Duration \u00b6 Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json. interface{} EmptyDirVolumeSource \u00b6 Empty directory volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example medium StorageMedium StorageMedium sizeLimit Quantity Quantity EnvFromSource \u00b6 EnvFromSource represents the source of a set of ConfigMaps Properties Name Type Go type Required Default Description Example configMapRef ConfigMapEnvSource ConfigMapEnvSource prefix string string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. +optional secretRef SecretEnvSource SecretEnvSource EnvVar \u00b6 Properties Name Type Go type Required Default Description Example name string string Name of the environment variable. Must be a C_IDENTIFIER. value string string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to \"\". 
+optional valueFrom EnvVarSource EnvVarSource EnvVarSource \u00b6 Properties Name Type Go type Required Default Description Example configMapKeyRef ConfigMapKeySelector ConfigMapKeySelector fieldRef ObjectFieldSelector ObjectFieldSelector resourceFieldRef ResourceFieldSelector ResourceFieldSelector secretKeyRef SecretKeySelector SecretKeySelector EphemeralVolumeSource \u00b6 Properties Name Type Go type Required Default Description Example volumeClaimTemplate PersistentVolumeClaimTemplate PersistentVolumeClaimTemplate ExecAction \u00b6 Properties Name Type Go type Required Default Description Example command []string []string Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions (' ', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. +optional ExecuteTemplateArgs \u00b6 Properties Name Type Go type Required Default Description Example template Template Template \u2713 workflow Workflow Workflow \u2713 ExecuteTemplateReply \u00b6 Properties Name Type Go type Required Default Description Example node NodeResult NodeResult requeue Duration Duration ExecutorConfig \u00b6 Properties Name Type Go type Required Default Description Example serviceAccountName string string ServiceAccountName specifies the service account name of the executor container. FCVolumeSource \u00b6 Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine +optional lun int32 (formatted integer) int32 lun is Optional: FC target lun number +optional readOnly boolean bool readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional targetWWNs []string []string targetWWNs is Optional: FC target worldwide names (WWNs) +optional wwids []string []string wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. +optional FieldsV1 \u00b6 Each key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f: ', where is the name of a field in a struct, or key in a map 'v: ', where is the exact json formatted value of a list item 'i: ', where is position of a item in a list 'k: ', where is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set. The exact format is defined in sigs.k8s.io/structured-merge-diff +protobuf.options.(gogoproto.goproto_stringer)=false interface{} FlexVolumeSource \u00b6 FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Properties Name Type Go type Required Default Description Example driver string string driver is the name of the driver to use for this volume. 
fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". The default filesystem depends on FlexVolume script. +optional options map of string map[string]string options is Optional: this field holds extra command options if any. +optional readOnly boolean bool readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional secretRef LocalObjectReference LocalObjectReference FlockerVolumeSource \u00b6 One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example datasetName string string datasetName is Name of the dataset stored as metadata -> name on the dataset for Flocker should be considered as deprecated +optional datasetUUID string string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset +optional GCEPersistentDiskVolumeSource \u00b6 A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine +optional partition int32 (formatted integer) int32 partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk +optional pdName string string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean bool readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk +optional GCSArtifact \u00b6 GCSArtifact is the location of a GCS artifact Properties Name Type Go type Required Default Description Example bucket string string Bucket is the name of the bucket key string string Key is the path in the bucket where the artifact resides serviceAccountKeySecret SecretKeySelector SecretKeySelector GRPCAction \u00b6 Properties Name Type Go type Required Default Description Example port int32 (formatted integer) int32 Port number of the gRPC service. Number must be in the range 1 to 65535. service string string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC. 
+optional +default=\"\" Gauge \u00b6 Gauge is a Gauge prometheus metric Properties Name Type Go type Required Default Description Example operation GaugeOperation GaugeOperation realtime boolean bool Realtime emits this metric in real time if applicable value string string Value is the value to be used in the operation with the metric's current value. If no operation is set, value is the value of the metric GaugeOperation \u00b6 Name Type Go type Default Description Example GaugeOperation string string GitArtifact \u00b6 GitArtifact is the location of a git artifact Properties Name Type Go type Required Default Description Example branch string string Branch is the branch to fetch when SingleBranch is enabled depth uint64 (formatted integer) uint64 Depth specifies that clones/fetches should be shallow and include the given number of commits from the branch tip disableSubmodules boolean bool DisableSubmodules disables submodules during git clone fetch []string []string Fetch specifies a number of refs that should be fetched before checkout insecureIgnoreHostKey boolean bool InsecureIgnoreHostKey disables SSH strict host key checking during git clone passwordSecret SecretKeySelector SecretKeySelector repo string string Repo is the git repository revision string string Revision is the git commit, tag, branch to checkout singleBranch boolean bool SingleBranch enables single branch clone, using the branch parameter sshPrivateKeySecret SecretKeySelector SecretKeySelector usernameSecret SecretKeySelector SecretKeySelector GitRepoVolumeSource \u00b6 DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Properties Name Type Go type Required Default Description Example directory string string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. +optional repository string string repository is the URL revision string string revision is the commit hash for the specified revision. +optional GlusterfsVolumeSource \u00b6 Glusterfs volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example endpoints string string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean bool readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod +optional HDFSArtifact \u00b6 HDFSArtifact is the location of an HDFS artifact Properties Name Type Go type Required Default Description Example addresses []string []string Addresses are accessible addresses of HDFS name nodes force boolean bool Force copies a file forcibly even if it exists hdfsUser string string HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used.
krbCCacheSecret SecretKeySelector SecretKeySelector krbConfigConfigMap ConfigMapKeySelector ConfigMapKeySelector krbKeytabSecret SecretKeySelector SecretKeySelector krbRealm string string KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used. krbServicePrincipalName string string KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used. krbUsername string string KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used. path string string Path is a file path in HDFS HTTP \u00b6 Properties Name Type Go type Required Default Description Example body string string Body is content of the HTTP Request bodyFrom HTTPBodySource HTTPBodySource headers HTTPHeaders HTTPHeaders insecureSkipVerify boolean bool InsecureSkipVerify is a bool when if set to true will skip TLS verification for the HTTP client method string string Method is HTTP methods for HTTP Request successCondition string string SuccessCondition is an expression if evaluated to true is considered successful timeoutSeconds int64 (formatted integer) int64 TimeoutSeconds is request timeout for HTTP Request. Default is 30 seconds url string string URL of the HTTP Request HTTPArtifact \u00b6 HTTPArtifact allows a file served on HTTP to be placed as an input artifact in a container Properties Name Type Go type Required Default Description Example auth HTTPAuth HTTPAuth headers [] Header []*Header Headers are an optional list of headers to send with HTTP requests for artifacts url string string URL of the artifact HTTPAuth \u00b6 Properties Name Type Go type Required Default Description Example basicAuth BasicAuth BasicAuth clientCert ClientCertAuth ClientCertAuth oauth2 OAuth2Auth OAuth2Auth HTTPBodySource \u00b6 Properties Name Type Go type Required Default Description Example bytes []uint8 (formatted integer) []uint8 HTTPGetAction \u00b6 Properties Name Type Go type Required Default Description Example host string string Host name to connect to, defaults to the pod IP. You probably want to set \"Host\" in httpHeaders instead. +optional httpHeaders [] HTTPHeader []*HTTPHeader Custom headers to set in the request. HTTP allows repeated headers. +optional path string string Path to access on the HTTP server. +optional port IntOrString IntOrString scheme URIScheme URIScheme HTTPHeader \u00b6 Properties Name Type Go type Required Default Description Example name string string value string string valueFrom HTTPHeaderSource HTTPHeaderSource HTTPHeaderSource \u00b6 Properties Name Type Go type Required Default Description Example secretKeyRef SecretKeySelector SecretKeySelector HTTPHeaders \u00b6 [] HTTPHeader Header \u00b6 Header indicate a key-value request header to be used when fetching artifacts over HTTP Properties Name Type Go type Required Default Description Example name string string Name is the header name value string string Value is the literal value to use for the header Histogram \u00b6 Histogram is a Histogram prometheus metric Properties Name Type Go type Required Default Description Example buckets [] Amount []Amount Buckets is a list of bucket divisors for the histogram value string string Value is the value of the metric HostAlias \u00b6 HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Properties Name Type Go type Required Default Description Example hostnames []string []string Hostnames for the above IP address. 
ip string string IP address of the host file entry. HostPathType \u00b6 +enum Name Type Go type Default Description Example HostPathType string string +enum HostPathVolumeSource \u00b6 Host path volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example path string string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type HostPathType HostPathType ISCSIVolumeSource \u00b6 ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example chapAuthDiscovery boolean bool chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication +optional chapAuthSession boolean bool chapAuthSession defines whether support iSCSI Session CHAP authentication +optional fsType string string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine +optional initiatorName string string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface : will be created for the connection. +optional iqn string string iqn is the target iSCSI Qualified Name. iscsiInterface string string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). +optional lun int32 (formatted integer) int32 lun represents iSCSI Target Lun number. portals []string []string portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). +optional readOnly boolean bool readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. +optional secretRef LocalObjectReference LocalObjectReference targetPortal string string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). Inputs \u00b6 Inputs are the mechanism for passing parameters, artifacts, volumes from one template to another Properties Name Type Go type Required Default Description Example artifacts Artifacts Artifacts parameters [] Parameter []*Parameter Parameters are a list of parameters passed as inputs +patchStrategy=merge +patchMergeKey=name IntOrString \u00b6 +protobuf=true +protobuf.options.(gogoproto.goproto_stringer)=false +k8s:openapi-gen=true Properties Name Type Go type Required Default Description Example IntVal int32 (formatted integer) int32 StrVal string string Type Type Type Item \u00b6 +protobuf.options.(gogoproto.goproto_stringer)=false +kubebuilder:validation:Type=object interface{} KeyToPath \u00b6 Properties Name Type Go type Required Default Description Example key string string key is the key to project. mode int32 (formatted integer) int32 mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. 
If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional path string string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. LabelSelector \u00b6 A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. +structType=atomic Properties Name Type Go type Required Default Description Example matchExpressions [] LabelSelectorRequirement []*LabelSelectorRequirement matchExpressions is a list of label selector requirements. The requirements are ANDed. +optional matchLabels map of string map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values array contains only \"value\". The requirements are ANDed. +optional LabelSelectorOperator \u00b6 Name Type Go type Default Description Example LabelSelectorOperator string string LabelSelectorRequirement \u00b6 A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Properties Name Type Go type Required Default Description Example key string string key is the label key that the selector applies to. +patchMergeKey=key +patchStrategy=merge operator LabelSelectorOperator LabelSelectorOperator values []string []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +optional Lifecycle \u00b6 Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Properties Name Type Go type Required Default Description Example postStart LifecycleHandler LifecycleHandler preStop LifecycleHandler LifecycleHandler LifecycleHandler \u00b6 LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Properties Name Type Go type Required Default Description Example exec ExecAction ExecAction httpGet HTTPGetAction HTTPGetAction tcpSocket TCPSocketAction TCPSocketAction LifecycleHook \u00b6 Properties Name Type Go type Required Default Description Example arguments Arguments Arguments expression string string Expression is a condition expression for when a node will be retried. If it evaluates to false, the node will not be retried and the retry strategy will be ignored template string string Template is the name of the template to execute by the hook templateRef TemplateRef TemplateRef LifecycleHooks \u00b6 LifecycleHooks LocalObjectReference \u00b6 LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. +structType=atomic Properties Name Type Go type Required Default Description Example name string string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional ManagedFieldsEntry \u00b6 ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to. Properties Name Type Go type Required Default Description Example apiVersion string string APIVersion defines the version of this resource that this field set applies to. The format is \"group/version\" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted. fieldsType string string FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: \"FieldsV1\" fieldsV1 FieldsV1 FieldsV1 manager string string Manager is an identifier of the workflow managing these fields. operation ManagedFieldsOperationType ManagedFieldsOperationType subresource string string Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource. time Time Time ManagedFieldsOperationType \u00b6 Name Type Go type Default Description Example ManagedFieldsOperationType string string ManifestFrom \u00b6 Properties Name Type Go type Required Default Description Example artifact Artifact Artifact Memoize \u00b6 Memoization enables caching for the Outputs of the template Properties Name Type Go type Required Default Description Example cache Cache Cache key string string Key is the key to use as the caching key maxAge string string MaxAge is the maximum age (e.g. \"180s\", \"24h\") of an entry that is still considered valid. If an entry is older than the MaxAge, it will be ignored. Metadata \u00b6 Pod metadata Properties Name Type Go type Required Default Description Example annotations map of string map[string]string labels map of string map[string]string MetricLabel \u00b6 MetricLabel is a single label for a prometheus metric Properties Name Type Go type Required Default Description Example key string string value string string Metrics \u00b6 Metrics are a list of metrics emitted from a Workflow/Template Properties Name Type Go type Required Default Description Example prometheus [] Prometheus []*Prometheus Prometheus is a list of prometheus metrics to be emitted MountPropagationMode \u00b6 +enum Name Type Go type Default Description Example MountPropagationMode string string +enum Mutex \u00b6 Mutex holds Mutex configuration Properties Name Type Go type Required Default Description Example name string string name of the mutex namespace string string \"[namespace of workflow]\" NFSVolumeSource \u00b6 NFS volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example path string string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean bool readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false.
More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs +optional server string string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs NodeAffinity \u00b6 Properties Name Type Go type Required Default Description Example preferredDuringSchedulingIgnoredDuringExecution [] PreferredSchedulingTerm []*PreferredSchedulingTerm The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. +optional requiredDuringSchedulingIgnoredDuringExecution NodeSelector NodeSelector NodePhase \u00b6 Name Type Go type Default Description Example NodePhase string string NodeResult \u00b6 Properties Name Type Go type Required Default Description Example message string string outputs Outputs Outputs phase NodePhase NodePhase progress Progress Progress NodeSelector \u00b6 A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. +structType=atomic Properties Name Type Go type Required Default Description Example nodeSelectorTerms [] NodeSelectorTerm []*NodeSelectorTerm Required. A list of node selector terms. The terms are ORed. NodeSelectorOperator \u00b6 A node selector operator is the set of operators that can be used in a node selector requirement. +enum Name Type Go type Default Description Example NodeSelectorOperator string string A node selector operator is the set of operators that can be used in a node selector requirement. +enum NodeSelectorRequirement \u00b6 A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Properties Name Type Go type Required Default Description Example key string string The label key that the selector applies to. operator NodeSelectorOperator NodeSelectorOperator values []string []string An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. +optional NodeSelectorTerm \u00b6 A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. +structType=atomic Properties Name Type Go type Required Default Description Example matchExpressions [] NodeSelectorRequirement []*NodeSelectorRequirement A list of node selector requirements by node's labels. +optional matchFields [] NodeSelectorRequirement []*NodeSelectorRequirement A list of node selector requirements by node's fields. +optional NoneStrategy \u00b6 NoneStrategy indicates to skip tar process and upload the files or directory tree as independent files. 
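As a hedged sketch of the none archive strategy on an output artifact (the artifact name and path below are hypothetical):

```yaml
# Hypothetical output artifact uploaded as-is, skipping the tar step.
outputs:
  artifacts:
    - name: report            # assumed artifact name
      path: /tmp/report       # assumed container path
      archive:
        none: {}              # ArchiveStrategy using NoneStrategy
```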
Note that if the artifact is a directory, the artifact driver must support the ability to save/load the directory appropriately. interface{} OAuth2Auth \u00b6 OAuth2Auth holds all information for client authentication via OAuth2 tokens Properties Name Type Go type Required Default Description Example clientIDSecret SecretKeySelector SecretKeySelector clientSecretSecret SecretKeySelector SecretKeySelector endpointParams [] OAuth2EndpointParam []*OAuth2EndpointParam scopes []string []string tokenURLSecret SecretKeySelector SecretKeySelector OAuth2EndpointParam \u00b6 EndpointParam is for requesting optional fields that should be sent in the oauth request Properties Name Type Go type Required Default Description Example key string string Name is the header name value string string Value is the literal value to use for the header OSSArtifact \u00b6 OSSArtifact is the location of an Alibaba Cloud OSS artifact Properties Name Type Go type Required Default Description Example accessKeySecret SecretKeySelector SecretKeySelector bucket string string Bucket is the name of the bucket createBucketIfNotPresent boolean bool CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist endpoint string string Endpoint is the hostname of the bucket endpoint key string string Key is the path in the bucket where the artifact resides lifecycleRule OSSLifecycleRule OSSLifecycleRule secretKeySecret SecretKeySelector SecretKeySelector securityToken string string SecurityToken is the user's temporary security token. For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm useSDKCreds boolean bool UseSDKCreds tells the driver to figure out credentials based on sdk defaults. OSSLifecycleRule \u00b6 OSSLifecycleRule specifies how to manage bucket's lifecycle Properties Name Type Go type Required Default Description Example markDeletionAfterDays int32 (formatted integer) int32 MarkDeletionAfterDays is the number of days before we delete objects in the bucket markInfrequentAccessAfterDays int32 (formatted integer) int32 MarkInfrequentAccessAfterDays is the number of days before we convert the objects in the bucket to Infrequent Access (IA) storage type ObjectFieldSelector \u00b6 +structType=atomic Properties Name Type Go type Required Default Description Example apiVersion string string Version of the schema the FieldPath is written in terms of, defaults to \"v1\". +optional fieldPath string string Path of the field to select in the specified API version. ObjectMeta \u00b6 Properties Name Type Go type Required Default Description Example name string string namespace string string uid string string Outputs \u00b6 Outputs hold parameters, artifacts, and results from a step Properties Name Type Go type Required Default Description Example artifacts Artifacts Artifacts exitCode string string ExitCode holds the exit code of a script template parameters [] Parameter []*Parameter Parameters holds the list of output parameters produced by a step +patchStrategy=merge +patchMergeKey=name result string string Result holds the result (stdout) of a script template OwnerReference \u00b6 OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. +structType=atomic Properties Name Type Go type Required Default Description Example apiVersion string string API version of the referent. 
blockOwnerDeletion boolean bool If true, AND if the owner has the \"foregroundDeletion\" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs \"delete\" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. +optional controller boolean bool If true, this reference points to the managing controller. +optional kind string string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string string Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names uid UID UID ParallelSteps \u00b6 +kubebuilder:validation:Type=array interface{} Parameter \u00b6 Parameter indicate a passed string parameter to a service template with an optional default value Properties Name Type Go type Required Default Description Example default AnyString AnyString description AnyString AnyString enum [] AnyString []AnyString Enum holds a list of string values to choose from, for the actual value of the parameter globalName string string GlobalName exports an output parameter to the global scope, making it available as '{{workflow.outputs.parameters.XXXX}} and in workflow.status.outputs.parameters name string string Name is the parameter name value AnyString AnyString valueFrom ValueFrom ValueFrom PersistentVolumeAccessMode \u00b6 +enum Name Type Go type Default Description Example PersistentVolumeAccessMode string string +enum PersistentVolumeClaimSpec \u00b6 PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Properties Name Type Go type Required Default Description Example accessModes [] PersistentVolumeAccessMode []PersistentVolumeAccessMode accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 +optional dataSource TypedLocalObjectReference TypedLocalObjectReference dataSourceRef TypedLocalObjectReference TypedLocalObjectReference resources ResourceRequirements ResourceRequirements selector LabelSelector LabelSelector storageClassName string string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 +optional volumeMode PersistentVolumeMode PersistentVolumeMode volumeName string string volumeName is the binding reference to the PersistentVolume backing this claim. +optional PersistentVolumeClaimTemplate \u00b6 PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. Properties Name Type Go type Required Default Description Example annotations map of string map[string]string Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations +optional clusterName string string Deprecated: ClusterName is a legacy field that was always cleared by the system and never used; it will be removed completely in 1.25. 
The name in the go struct is changed to help clients detect accidental use. +optional | | | creationTimestamp | Time | Time | | | | | | deletionGracePeriodSeconds | int64 (formatted integer)| int64 | | | Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. +optional | | | deletionTimestamp | Time | Time | | | | | | finalizers | []string| []string | | | Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. +optional +patchStrategy=merge | | | generateName | string| string | | | GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will return a 409. Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency +optional | | | generation | int64 (formatted integer)| int64 | | | A sequence number representing a specific generation of the desired state. Populated by the system. Read-only. +optional | | | labels | map of string| map[string]string | | | Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels +optional | | | managedFields | [] ManagedFieldsEntry | []*ManagedFieldsEntry | | | ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like \"ci-cd\". The set of fields is always in the version that the workflow used when modifying the object. +optional | | | name | string| string | | | Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. 
More info: http://kubernetes.io/docs/user-guide/identifiers#names +optional | | | namespace | string| string | | | Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces +optional | | | ownerReferences | [] OwnerReference | []*OwnerReference | | | List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. +optional +patchMergeKey=uid +patchStrategy=merge | | | resourceVersion | string| string | | | An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency +optional | | | selfLink | string| string | | | Deprecated: selfLink is a legacy read-only field that is no longer populated by the system. +optional | | | spec | PersistentVolumeClaimSpec | PersistentVolumeClaimSpec | | | | | | uid | UID | UID | | | | | PersistentVolumeClaimVolumeSource \u00b6 This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). Properties Name Type Go type Required Default Description Example claimName string string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean bool readOnly Will force the ReadOnly setting in VolumeMounts. Default false. +optional PersistentVolumeMode \u00b6 +enum Name Type Go type Default Description Example PersistentVolumeMode string string +enum PhotonPersistentDiskVolumeSource \u00b6 Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. pdID string string pdID is the ID that identifies Photon Controller persistent disk Plugin \u00b6 Plugin is an Object with exactly one key interface{} PodAffinity \u00b6 Properties Name Type Go type Required Default Description Example preferredDuringSchedulingIgnoredDuringExecution [] WeightedPodAffinityTerm []*WeightedPodAffinityTerm The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. 
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. +optional requiredDuringSchedulingIgnoredDuringExecution [] PodAffinityTerm []*PodAffinityTerm If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +optional PodAffinityTerm \u00b6 Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running Properties Name Type Go type Required Default Description Example labelSelector LabelSelector LabelSelector namespaceSelector LabelSelector LabelSelector namespaces []string []string namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means \"this pod's namespace\". +optional topologyKey string string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. PodAntiAffinity \u00b6 Properties Name Type Go type Required Default Description Example preferredDuringSchedulingIgnoredDuringExecution [] WeightedPodAffinityTerm []*WeightedPodAffinityTerm The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. +optional requiredDuringSchedulingIgnoredDuringExecution [] PodAffinityTerm []*PodAffinityTerm If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +optional PodFSGroupChangePolicy \u00b6 PodFSGroupChangePolicy holds policies that will be used for applying fsGroup to a volume when volume is mounted. +enum Name Type Go type Default Description Example PodFSGroupChangePolicy string string PodFSGroupChangePolicy holds policies that will be used for applying fsGroup to a volume when volume is mounted. +enum PodSecurityContext \u00b6 Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Properties Name Type Go type Required Default Description Example fsGroup int64 (formatted integer) int64 A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: The owning GID will be the FSGroup The setgid bit is set (new files created in the volume will be owned by FSGroup) The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. +optional | | | fsGroupChangePolicy | PodFSGroupChangePolicy | PodFSGroupChangePolicy | | | | | | runAsGroup | int64 (formatted integer)| int64 | | | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. +optional | | | runAsNonRoot | boolean| bool | | | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. +optional | | | runAsUser | int64 (formatted integer)| int64 | | | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. +optional | | | seLinuxOptions | SELinuxOptions | SELinuxOptions | | | | | | seccompProfile | SeccompProfile | SeccompProfile | | | | | | supplementalGroups | []int64 (formatted integer)| []int64 | | | A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows. +optional | | | sysctls | [] Sysctl | []*Sysctl | | | Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. 
+optional | | | windowsOptions | WindowsSecurityContextOptions | WindowsSecurityContextOptions | | | | | PortworxVolumeSource \u00b6 Properties Name Type Go type Required Default Description Example fsType string string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\". Implicitly inferred to be \"ext4\" if unspecified. readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional volumeID string string volumeID uniquely identifies a Portworx volume PreferredSchedulingTerm \u00b6 An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Properties Name Type Go type Required Default Description Example preference NodeSelectorTerm NodeSelectorTerm weight int32 (formatted integer) int32 Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. Probe \u00b6 Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Properties Name Type Go type Required Default Description Example exec ExecAction ExecAction failureThreshold int32 (formatted integer) int32 Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. +optional grpc GRPCAction GRPCAction httpGet HTTPGetAction HTTPGetAction initialDelaySeconds int32 (formatted integer) int32 Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes +optional periodSeconds int32 (formatted integer) int32 How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. +optional successThreshold int32 (formatted integer) int32 Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. +optional tcpSocket TCPSocketAction TCPSocketAction terminationGracePeriodSeconds int64 (formatted integer) int64 Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. +optional timeoutSeconds int32 (formatted integer) int32 Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes +optional ProcMountType \u00b6 +enum Name Type Go type Default Description Example ProcMountType string string +enum Progress \u00b6 Name Type Go type Default Description Example Progress string string ProjectedVolumeSource \u00b6 Represents a projected volume source Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional sources [] VolumeProjection []*VolumeProjection sources is the list of volume projections +optional Prometheus \u00b6 Prometheus is a prometheus metric to be emitted Properties Name Type Go type Required Default Description Example counter Counter Counter gauge Gauge Gauge help string string Help is a string that describes the metric histogram Histogram Histogram labels [] MetricLabel []*MetricLabel Labels is a list of metric labels name string string Name is the name of the metric when string string When is a conditional statement that decides when to emit the metric Protocol \u00b6 +enum Name Type Go type Default Description Example Protocol string string +enum PullPolicy \u00b6 PullPolicy describes a policy for if/when to pull a container image +enum Name Type Go type Default Description Example PullPolicy string string PullPolicy describes a policy for if/when to pull a container image +enum Quantity \u00b6 The serialization format is: ::= (Note that may be empty, from the \"\" case in .) ::= 0 | 1 | ... | 9 ::= | ::= | . | . | . ::= \"+\" | \"-\" ::= | ::= | | ::= Ki | Mi | Gi | Ti | Pi | Ei (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html) ::= m | \"\" | k | M | G | T | P | E (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.) ::= \"e\" | \"E\" No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized. Before serializing, Quantity will be put in \"canonical form\". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that: a. No precision is lost b. No fractional digits will be emitted c. The exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative. Examples: 1.5 will be serialized as \"1500m\" 1.5Gi will be serialized as \"1536Mi\" Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise. Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.) 
This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation. +protobuf=true +protobuf.embed=string +protobuf.options.marshal=false +protobuf.options.(gogoproto.goproto_stringer)=false +k8s:deepcopy-gen=true +k8s:openapi-gen=true interface{} QuobyteVolumeSource \u00b6 Quobyte volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example group string string group to map volume access to Default is no group +optional readOnly boolean bool readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. +optional registry string string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin +optional user string string user to map volume access to Defaults to serivceaccount user +optional volume string string volume is a string that references an already created Quobyte volume by name. RBDVolumeSource \u00b6 RBD volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine +optional image string string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional monitors []string []string monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional readOnly boolean bool readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional secretRef LocalObjectReference LocalObjectReference user string string user is the rados user name. Default is admin. 
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional RawArtifact \u00b6 RawArtifact allows raw string content to be placed as an artifact in a container Properties Name Type Go type Required Default Description Example data string string Data is the string contents of the artifact ResourceFieldSelector \u00b6 ResourceFieldSelector represents container resources (cpu, memory) and their output format +structType=atomic Properties Name Type Go type Required Default Description Example containerName string string Container name: required for volumes, optional for env vars +optional divisor Quantity Quantity resource string string Required: resource to select ResourceList \u00b6 ResourceList ResourceRequirements \u00b6 Properties Name Type Go type Required Default Description Example limits ResourceList ResourceList requests ResourceList ResourceList ResourceTemplate \u00b6 ResourceTemplate is a template subtype to manipulate kubernetes resources Properties Name Type Go type Required Default Description Example action string string Action is the action to perform to the resource. Must be one of: get, create, apply, delete, replace, patch failureCondition string string FailureCondition is a label selector expression which describes the conditions of the k8s resource in which the step was considered failed flags []string []string Flags is a set of additional options passed to kubectl before submitting a resource I.e. to disable resource validation: flags: [ \"--validate=false\" # disable resource validation ] manifest string string Manifest contains the kubernetes manifest manifestFrom ManifestFrom ManifestFrom mergeStrategy string string MergeStrategy is the strategy used to merge a patch. It defaults to \"strategic\" Must be one of: strategic, merge, json setOwnerReference boolean bool SetOwnerReference sets the reference to the workflow on the OwnerReference of generated resource. successCondition string string SuccessCondition is a label selector expression which describes the conditions of the k8s resource in which it is acceptable to proceed to the following step RetryAffinity \u00b6 Properties Name Type Go type Required Default Description Example nodeAntiAffinity RetryNodeAntiAffinity RetryNodeAntiAffinity RetryNodeAntiAffinity \u00b6 In order to prevent running steps on the same host, it uses \"kubernetes.io/hostname\". interface{} RetryPolicy \u00b6 Name Type Go type Default Description Example RetryPolicy string string RetryStrategy \u00b6 RetryStrategy provides controls on how to retry a workflow step Properties Name Type Go type Required Default Description Example affinity RetryAffinity RetryAffinity backoff Backoff Backoff expression string string Expression is a condition expression for when a node will be retried. 
If it evaluates to false, the node will not be retried and the retry strategy will be ignored limit IntOrString IntOrString retryPolicy RetryPolicy RetryPolicy S3Artifact \u00b6 S3Artifact is the location of an S3 artifact Properties Name Type Go type Required Default Description Example accessKeySecret SecretKeySelector SecretKeySelector bucket string string Bucket is the name of the bucket caSecret SecretKeySelector SecretKeySelector createBucketIfNotPresent CreateS3BucketOptions CreateS3BucketOptions encryptionOptions S3EncryptionOptions S3EncryptionOptions endpoint string string Endpoint is the hostname of the bucket endpoint insecure boolean bool Insecure will connect to the service with TLS key string string Key is the key in the bucket where the artifact resides region string string Region contains the optional bucket region roleARN string string RoleARN is the Amazon Resource Name (ARN) of the role to assume. secretKeySecret SecretKeySelector SecretKeySelector useSDKCreds boolean bool UseSDKCreds tells the driver to figure out credentials based on sdk defaults. S3EncryptionOptions \u00b6 S3EncryptionOptions used to determine encryption options during s3 operations Properties Name Type Go type Required Default Description Example enableEncryption boolean bool EnableEncryption tells the driver to encrypt objects if set to true. If kmsKeyId and serverSideCustomerKeySecret are not set, SSE-S3 will be used kmsEncryptionContext string string KmsEncryptionContext is a json blob that contains an encryption context. See https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context for more information kmsKeyId string string KMSKeyId tells the driver to encrypt the object using the specified KMS Key. serverSideCustomerKeySecret SecretKeySelector SecretKeySelector SELinuxOptions \u00b6 SELinuxOptions are the labels to be applied to the container Properties Name Type Go type Required Default Description Example level string string Level is SELinux level label that applies to the container. +optional role string string Role is a SELinux role label that applies to the container. +optional type string string Type is a SELinux type label that applies to the container. +optional user string string User is a SELinux user label that applies to the container. +optional ScaleIOVolumeSource \u00b6 ScaleIOVolumeSource represents a persistent ScaleIO volume Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Default is \"xfs\". +optional gateway string string gateway is the host address of the ScaleIO API Gateway. protectionDomain string string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. +optional readOnly boolean bool readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional secretRef LocalObjectReference LocalObjectReference sslEnabled boolean bool sslEnabled Flag enable/disable SSL communication with Gateway, default false +optional storageMode string string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. +optional storagePool string string storagePool is the ScaleIO Storage Pool associated with the protection domain. +optional system string string system is the name of the storage system as configured in ScaleIO. 
volumeName string string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. ScriptTemplate \u00b6 ScriptTemplate is a template subtype to enable scripting through code steps Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. 
+optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext source string string Source contains the source code of the script to execute startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional SeccompProfile \u00b6 Only one profile source may be set. +union Properties Name Type Go type Required Default Description Example localhostProfile string string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is \"Localhost\". +optional type SeccompProfileType SeccompProfileType SeccompProfileType \u00b6 +enum Name Type Go type Default Description Example SeccompProfileType string string +enum SecretEnvSource \u00b6 The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Properties Name Type Go type Required Default Description Example name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 
+optional optional boolean bool Specify whether the Secret must be defined +optional SecretKeySelector \u00b6 +structType=atomic Properties Name Type Go type Required Default Description Example key string string The key of the secret to select from. Must be a valid secret key. name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool Specify whether the Secret or its key must be defined +optional SecretProjection \u00b6 The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. Properties Name Type Go type Required Default Description Example items [] KeyToPath []*KeyToPath items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool optional field specify whether the Secret or its key must be defined +optional SecretVolumeSource \u00b6 The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional items [] KeyToPath []*KeyToPath items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional optional boolean bool optional field specify whether the Secret or its keys must be defined +optional secretName string string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret +optional SecurityContext \u00b6 Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. 
Properties Name Type Go type Required Default Description Example allowPrivilegeEscalation boolean bool AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. +optional capabilities Capabilities Capabilities privileged boolean bool Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. +optional procMount ProcMountType ProcMountType readOnlyRootFilesystem boolean bool Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. +optional runAsGroup int64 (formatted integer) int64 The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. +optional runAsNonRoot boolean bool Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. +optional runAsUser int64 (formatted integer) int64 The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. +optional seLinuxOptions SELinuxOptions SELinuxOptions seccompProfile SeccompProfile SeccompProfile windowsOptions WindowsSecurityContextOptions WindowsSecurityContextOptions SemaphoreRef \u00b6 SemaphoreRef is a reference of Semaphore Properties Name Type Go type Required Default Description Example configMapKeyRef ConfigMapKeySelector ConfigMapKeySelector namespace string string \"[namespace of workflow]\" Sequence \u00b6 Sequence expands a workflow step into numeric range Properties Name Type Go type Required Default Description Example count IntOrString IntOrString end IntOrString IntOrString format string string Format is a printf format string to format the value in the sequence start IntOrString IntOrString ServiceAccountTokenProjection \u00b6 ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). Properties Name Type Go type Required Default Description Example audience string string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. 
+optional expirationSeconds int64 (formatted integer) int64 expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. +optional path string string path is the path relative to the mount point of the file to project the token into. StorageMedium \u00b6 Name Type Go type Default Description Example StorageMedium string string StorageOSVolumeSource \u00b6 Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. +optional readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional secretRef LocalObjectReference LocalObjectReference volumeName string string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to \"default\" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. +optional SuppliedValueFrom \u00b6 interface{} SuspendTemplate \u00b6 SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time Properties Name Type Go type Required Default Description Example duration string string Duration is the seconds to wait before automatically resuming a template. Must be a string. Default unit is seconds. Could also be a Duration, e.g.: \"2m\", \"6h\" Synchronization \u00b6 Synchronization holds synchronization lock configuration Properties Name Type Go type Required Default Description Example mutex Mutex Mutex semaphore SemaphoreRef SemaphoreRef Sysctl \u00b6 Sysctl defines a kernel parameter to be set Properties Name Type Go type Required Default Description Example name string string Name of a property to set value string string Value of a property to set TCPSocketAction \u00b6 TCPSocketAction describes an action based on opening a socket Properties Name Type Go type Required Default Description Example host string string Optional: Host name to connect to, defaults to the pod IP. +optional port IntOrString IntOrString TaintEffect \u00b6 +enum Name Type Go type Default Description Example TaintEffect string string +enum TarStrategy \u00b6 TarStrategy will tar and gzip the file or directory when saving Properties Name Type Go type Required Default Description Example compressionLevel int32 (formatted integer) int32 CompressionLevel specifies the gzip compression level to use for the artifact. Defaults to gzip.DefaultCompression. 
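The following YAML sketch is illustrative only and is not part of the generated field reference: it shows how the archive strategy (TarStrategy with compressionLevel), Mutex-based Synchronization, and SuspendTemplate fields documented above are typically combined in a Workflow manifest. Every name in it (archive-example, single-writer, my-artifact, produce, pause) is a hypothetical placeholder, not a value taken from this reference.

```yaml
# Illustrative sketch only; all names are hypothetical placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: archive-example-     # ObjectMeta.generateName: server appends a unique suffix
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: produce
            template: produce
        - - name: pause
            template: pause
    - name: produce
      synchronization:
        mutex:
          name: single-writer        # Mutex.name: only one workflow holds this lock at a time
      container:
        image: alpine:3.18
        command: [sh, -c, 'echo hello > /tmp/out.txt']
      outputs:
        artifacts:
          - name: my-artifact
            path: /tmp/out.txt
            archive:
              tar:
                compressionLevel: 9  # TarStrategy.compressionLevel (gzip level)
    - name: pause
      suspend:
        duration: 2m                 # SuspendTemplate.duration: auto-resume after two minutes
```

Read as a sketch, the produce step acquires the single-writer mutex before running, its output artifact is tarred and gzipped at the requested compression level, and the pause step resumes automatically after two minutes.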
Template \u00b6 Template is a reusable and composable unit of execution in a workflow Properties Name Type Go type Required Default Description Example activeDeadlineSeconds IntOrString IntOrString affinity Affinity Affinity archiveLocation ArtifactLocation ArtifactLocation automountServiceAccountToken boolean bool AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods. ServiceAccountName of ExecutorConfig must be specified if this value is false. container Container Container containerSet ContainerSetTemplate ContainerSetTemplate daemon boolean bool Daemon will allow a workflow to proceed to the next step so long as the container reaches readiness dag DAGTemplate DAGTemplate data Data Data executor ExecutorConfig ExecutorConfig failFast boolean bool FailFast, if specified, will fail this template if any of its child pods has failed. This is useful for when this template is expanded with withItems , etc. hostAliases [] HostAlias []*HostAlias HostAliases is an optional list of hosts and IPs that will be injected into the pod spec +patchStrategy=merge +patchMergeKey=ip http HTTP HTTP initContainers [] UserContainer []*UserContainer InitContainers is a list of containers which run before the main container. +patchStrategy=merge +patchMergeKey=name inputs Inputs Inputs memoize Memoize Memoize metadata Metadata Metadata metrics Metrics Metrics name string string Name is the name of the template nodeSelector map of string map[string]string NodeSelector is a selector to schedule this step of the workflow to be run on the selected node(s). Overrides the selector set at the workflow level. outputs Outputs Outputs parallelism int64 (formatted integer) int64 Parallelism limits the max total parallel pods that can execute at the same time within the boundaries of this template invocation. If additional steps/dag templates are invoked, the pods created by those templates will not be counted towards this total. plugin Plugin Plugin podSpecPatch string string PodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of container fields which are not strings (e.g. resource limits). priority int32 (formatted integer) int32 Priority to apply to workflow pods. priorityClassName string string PriorityClassName to apply to workflow pods. resource ResourceTemplate ResourceTemplate retryStrategy RetryStrategy RetryStrategy schedulerName string string If specified, the pod will be dispatched by specified scheduler. Or it will be dispatched by workflow scope scheduler if specified. If neither specified, the pod will be dispatched by default scheduler. +optional script ScriptTemplate ScriptTemplate securityContext PodSecurityContext PodSecurityContext serviceAccountName string string ServiceAccountName to apply to workflow pods sidecars [] UserContainer []*UserContainer Sidecars is a list of containers which run alongside the main container Sidecars are automatically killed when the main container completes +patchStrategy=merge +patchMergeKey=name steps [] ParallelSteps []ParallelSteps Steps define a series of sequential/parallel workflow steps suspend SuspendTemplate SuspendTemplate synchronization Synchronization Synchronization timeout string string Timeout allows to set the total node execution timeout duration counting from the node's start time. This duration also includes time in which the node spends in Pending state. This duration may not be applied to Step or DAG templates. 
tolerations [] Toleration []*Toleration Tolerations to apply to workflow pods. +patchStrategy=merge +patchMergeKey=key volumes [] Volume []*Volume Volumes is a list of volumes that can be mounted by containers in a template. +patchStrategy=merge +patchMergeKey=name TemplateRef \u00b6 Properties Name Type Go type Required Default Description Example clusterScope boolean bool ClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate). name string string Name is the resource name of the template. template string string Template is the name of referred template in the resource. TerminationMessagePolicy \u00b6 +enum Name Type Go type Default Description Example TerminationMessagePolicy string string +enum Time \u00b6 +protobuf.options.marshal=false +protobuf.as=Timestamp +protobuf.options.(gogoproto.goproto_stringer)=false interface{} Toleration \u00b6 The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator . Properties Name Type Go type Required Default Description Example effect TaintEffect TaintEffect key string string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. +optional operator TolerationOperator TolerationOperator tolerationSeconds int64 (formatted integer) int64 TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. +optional value string string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. +optional TolerationOperator \u00b6 +enum Name Type Go type Default Description Example TolerationOperator string string +enum Transformation \u00b6 [] TransformationStep TransformationStep \u00b6 Properties Name Type Go type Required Default Description Example expression string string Expression defines an expr expression to apply Type \u00b6 Name Type Go type Default Description Example Type int64 (formatted integer) int64 TypedLocalObjectReference \u00b6 TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. +structType=atomic Properties Name Type Go type Required Default Description Example apiGroup string string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. +optional kind string string Kind is the type of resource being referenced name string string Name is the name of resource being referenced UID \u00b6 UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. Name Type Go type Default Description Example UID string string UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. 
URIScheme \u00b6 URIScheme identifies the scheme used for connection to a host for Get actions +enum Name Type Go type Default Description Example URIScheme string string URIScheme identifies the scheme used for connection to a host for Get actions +enum UserContainer \u00b6 Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe mirrorVolumeMounts boolean bool MirrorVolumeMounts will mount the same volumes specified in the main container to the container (including artifacts), at the same mountPaths. This enables dind daemon to partially see the same filesystem as the main container in order to use features such as docker volume binding name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. 
Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional ValueFrom \u00b6 ValueFrom describes a location in which to obtain the value to a parameter Properties Name Type Go type Required Default Description Example configMapKeyRef ConfigMapKeySelector ConfigMapKeySelector default AnyString AnyString event string string Selector (https://github.com/antonmedv/expr) that is evaluated against the event to get the value of the parameter. E.g. payload.message expression string string Expression, if defined, is evaluated to specify the value for the parameter jqFilter string string JQFilter expression against the resource object in resource templates jsonPath string string JSONPath of a resource to retrieve an output parameter value from in resource templates parameter string string Parameter reference to a step or dag task in which to retrieve an output parameter value from (e.g. 
'{{steps.mystep.outputs.myparam}}') path string string Path in the container to retrieve an output parameter value from in container templates supplied SuppliedValueFrom SuppliedValueFrom Volume \u00b6 Properties Name Type Go type Required Default Description Example awsElasticBlockStore AWSElasticBlockStoreVolumeSource AWSElasticBlockStoreVolumeSource azureDisk AzureDiskVolumeSource AzureDiskVolumeSource azureFile AzureFileVolumeSource AzureFileVolumeSource cephfs CephFSVolumeSource CephFSVolumeSource cinder CinderVolumeSource CinderVolumeSource configMap ConfigMapVolumeSource ConfigMapVolumeSource csi CSIVolumeSource CSIVolumeSource downwardAPI DownwardAPIVolumeSource DownwardAPIVolumeSource emptyDir EmptyDirVolumeSource EmptyDirVolumeSource ephemeral EphemeralVolumeSource EphemeralVolumeSource fc FCVolumeSource FCVolumeSource flexVolume FlexVolumeSource FlexVolumeSource flocker FlockerVolumeSource FlockerVolumeSource gcePersistentDisk GCEPersistentDiskVolumeSource GCEPersistentDiskVolumeSource gitRepo GitRepoVolumeSource GitRepoVolumeSource glusterfs GlusterfsVolumeSource GlusterfsVolumeSource hostPath HostPathVolumeSource HostPathVolumeSource iscsi ISCSIVolumeSource ISCSIVolumeSource name string string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs NFSVolumeSource NFSVolumeSource persistentVolumeClaim PersistentVolumeClaimVolumeSource PersistentVolumeClaimVolumeSource photonPersistentDisk PhotonPersistentDiskVolumeSource PhotonPersistentDiskVolumeSource portworxVolume PortworxVolumeSource PortworxVolumeSource projected ProjectedVolumeSource ProjectedVolumeSource quobyte QuobyteVolumeSource QuobyteVolumeSource rbd RBDVolumeSource RBDVolumeSource scaleIO ScaleIOVolumeSource ScaleIOVolumeSource secret SecretVolumeSource SecretVolumeSource storageos StorageOSVolumeSource StorageOSVolumeSource vsphereVolume VsphereVirtualDiskVolumeSource VsphereVirtualDiskVolumeSource VolumeDevice \u00b6 Properties Name Type Go type Required Default Description Example devicePath string string devicePath is the path inside of the container that the device will be mapped to. name string string name must match the name of a persistentVolumeClaim in the pod VolumeMount \u00b6 Properties Name Type Go type Required Default Description Example mountPath string string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation MountPropagationMode MountPropagationMode name string string This must match the Name of a Volume. readOnly boolean bool Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. +optional subPath string string Path within the volume from which the container's volume should be mounted. Defaults to \"\" (volume's root). +optional subPathExpr string string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to \"\" (volume's root). SubPathExpr and SubPath are mutually exclusive. 
+optional VolumeProjection \u00b6 Projection that may be projected along with other supported volume types Properties Name Type Go type Required Default Description Example configMap ConfigMapProjection ConfigMapProjection downwardAPI DownwardAPIProjection DownwardAPIProjection secret SecretProjection SecretProjection serviceAccountToken ServiceAccountTokenProjection ServiceAccountTokenProjection VsphereVirtualDiskVolumeSource \u00b6 Properties Name Type Go type Required Default Description Example fsType string string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. +optional storagePolicyID string string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. +optional storagePolicyName string string storagePolicyName is the storage Policy Based Management (SPBM) profile name. +optional volumePath string string volumePath is the path that identifies vSphere volume vmdk WeightedPodAffinityTerm \u00b6 The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Properties Name Type Go type Required Default Description Example podAffinityTerm PodAffinityTerm PodAffinityTerm weight int32 (formatted integer) int32 weight associated with matching the corresponding podAffinityTerm, in the range 1-100. WindowsSecurityContextOptions \u00b6 Properties Name Type Go type Required Default Description Example gmsaCredentialSpec string string GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. +optional gmsaCredentialSpecName string string GMSACredentialSpecName is the name of the GMSA credential spec to use. +optional hostProcess boolean bool HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. +optional runAsUserName string string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
+optional Workflow \u00b6 Properties Name Type Go type Required Default Description Example metadata ObjectMeta ObjectMeta \u2713 ZipStrategy \u00b6 ZipStrategy will unzip zipped input artifacts interface{}","title":"The API for an executor plugin."},{"location":"executor_swagger/#the-api-for-an-executor-plugin","text":"","title":"The API for an executor plugin."},{"location":"executor_swagger/#informations","text":"","title":"Informations"},{"location":"executor_swagger/#version","text":"0.0.1","title":"Version"},{"location":"executor_swagger/#content-negotiation","text":"","title":"Content negotiation"},{"location":"executor_swagger/#uri-schemes","text":"http","title":"URI Schemes"},{"location":"executor_swagger/#consumes","text":"application/json","title":"Consumes"},{"location":"executor_swagger/#produces","text":"application/json","title":"Produces"},{"location":"executor_swagger/#all-endpoints","text":"","title":"All endpoints"},{"location":"executor_swagger/#operations","text":"Method URI Name Summary POST /api/v1/template.execute execute template","title":"operations"},{"location":"executor_swagger/#paths","text":"","title":"Paths"},{"location":"executor_swagger/#execute-template-executetemplate","text":"POST /api/v1/template.execute","title":" execute template (executeTemplate)"},{"location":"executor_swagger/#parameters","text":"Name Source Type Go type Separator Required Default Description Body body ExecuteTemplateArgs models.ExecuteTemplateArgs \u2713","title":"Parameters"},{"location":"executor_swagger/#all-responses","text":"Code Status Description Has headers Schema 200 OK schema","title":"All responses"},{"location":"executor_swagger/#responses","text":"","title":"Responses"},{"location":"executor_swagger/#200","text":"Status: OK","title":" 200"},{"location":"executor_swagger/#schema","text":"ExecuteTemplateReply","title":" Schema"},{"location":"executor_swagger/#models","text":"","title":"Models"},{"location":"executor_swagger/#awselasticblockstorevolumesource","text":"An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine +optional partition int32 (formatted integer) int32 partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). +optional readOnly boolean bool readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore +optional volumeID string string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). 
More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore","title":" AWSElasticBlockStoreVolumeSource"},{"location":"executor_swagger/#affinity","text":"Properties Name Type Go type Required Default Description Example nodeAffinity NodeAffinity NodeAffinity podAffinity PodAffinity PodAffinity podAntiAffinity PodAntiAffinity PodAntiAffinity","title":" Affinity"},{"location":"executor_swagger/#amount","text":"+kubebuilder:validation:Type=number interface{}","title":" Amount"},{"location":"executor_swagger/#anystring","text":"It will unmarshall int64, int32, float64, float32, boolean, a plain string and represents it as string. It will marshall back to string - marshalling is not symmetric. Name Type Go type Default Description Example AnyString string string It will unmarshall int64, int32, float64, float32, boolean, a plain string and represents it as string. It will marshall back to string - marshalling is not symmetric.","title":" AnyString"},{"location":"executor_swagger/#archivestrategy","text":"ArchiveStrategy describes how to archive files/directory when saving artifacts Properties Name Type Go type Required Default Description Example none NoneStrategy NoneStrategy tar TarStrategy TarStrategy zip ZipStrategy ZipStrategy","title":" ArchiveStrategy"},{"location":"executor_swagger/#arguments","text":"Arguments to a template Properties Name Type Go type Required Default Description Example artifacts Artifacts Artifacts parameters [] Parameter []*Parameter Parameters is the list of parameters to pass to the template or workflow +patchStrategy=merge +patchMergeKey=name","title":" Arguments"},{"location":"executor_swagger/#artifact","text":"Artifact indicates an artifact to place at a specified path Properties Name Type Go type Required Default Description Example archive ArchiveStrategy ArchiveStrategy archiveLogs boolean bool ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC artifactory ArtifactoryArtifact ArtifactoryArtifact azure AzureArtifact AzureArtifact deleted boolean bool Has this been deleted? from string string From allows an artifact to reference an artifact from a previous step fromExpression string string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCSArtifact git GitArtifact GitArtifact globalName string string GlobalName exports an output artifact to the global scope, making it available as '{{workflow.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFSArtifact http HTTPArtifact HTTPArtifact mode int32 (formatted integer) int32 mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string string name of the artifact. must be unique within a template's inputs/outputs. 
optional boolean bool Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSSArtifact path string string Path is the container path to the artifact raw RawArtifact RawArtifact recurseMode boolean bool If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3Artifact subPath string string SubPath allows an artifact to be sourced from a subpath within the specified source","title":" Artifact"},{"location":"executor_swagger/#artifactgc","text":"ArtifactGC describes how to delete artifacts from completed Workflows - this is embedded into the WorkflowLevelArtifactGC, and also used for individual Artifacts to override that as needed Properties Name Type Go type Required Default Description Example podMetadata Metadata Metadata serviceAccountName string string ServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion strategy ArtifactGCStrategy ArtifactGCStrategy","title":" ArtifactGC"},{"location":"executor_swagger/#artifactgcstrategy","text":"Name Type Go type Default Description Example ArtifactGCStrategy string string","title":" ArtifactGCStrategy"},{"location":"executor_swagger/#artifactlocation","text":"It is used as single artifact in the context of inputs/outputs (e.g. outputs.artifacts.artname). It is also used to describe the location of multiple artifacts such as the archive location of a single workflow step, which the executor will use as a default location to store its files. Properties Name Type Go type Required Default Description Example archiveLogs boolean bool ArchiveLogs indicates if the container logs should be archived artifactory ArtifactoryArtifact ArtifactoryArtifact azure AzureArtifact AzureArtifact gcs GCSArtifact GCSArtifact git GitArtifact GitArtifact hdfs HDFSArtifact HDFSArtifact http HTTPArtifact HTTPArtifact oss OSSArtifact OSSArtifact raw RawArtifact RawArtifact s3 S3Artifact S3Artifact","title":" ArtifactLocation"},{"location":"executor_swagger/#artifactpaths","text":"ArtifactPaths expands a step from a collection of artifacts Properties Name Type Go type Required Default Description Example archive ArchiveStrategy ArchiveStrategy archiveLogs boolean bool ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC artifactory ArtifactoryArtifact ArtifactoryArtifact azure AzureArtifact AzureArtifact deleted boolean bool Has this been deleted? from string string From allows an artifact to reference an artifact from a previous step fromExpression string string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCSArtifact git GitArtifact GitArtifact globalName string string GlobalName exports an output artifact to the global scope, making it available as '{{workflow.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFSArtifact http HTTPArtifact HTTPArtifact mode int32 (formatted integer) int32 mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string string name of the artifact. must be unique within a template's inputs/outputs. 
optional boolean bool Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSSArtifact path string string Path is the container path to the artifact raw RawArtifact RawArtifact recurseMode boolean bool If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3Artifact subPath string string SubPath allows an artifact to be sourced from a subpath within the specified source","title":" ArtifactPaths"},{"location":"executor_swagger/#artifactoryartifact","text":"ArtifactoryArtifact is the location of an artifactory artifact Properties Name Type Go type Required Default Description Example passwordSecret SecretKeySelector SecretKeySelector url string string URL of the artifact usernameSecret SecretKeySelector SecretKeySelector","title":" ArtifactoryArtifact"},{"location":"executor_swagger/#artifacts","text":"[] Artifact","title":" Artifacts"},{"location":"executor_swagger/#azureartifact","text":"AzureArtifact is the location of a an Azure Storage artifact Properties Name Type Go type Required Default Description Example accountKeySecret SecretKeySelector SecretKeySelector blob string string Blob is the blob name (i.e., path) in the container where the artifact resides container string string Container is the container where resources will be stored endpoint string string Endpoint is the service url associated with an account. It is most likely \"https:// .blob.core.windows.net\" useSDKCreds boolean bool UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":" AzureArtifact"},{"location":"executor_swagger/#azuredatadiskcachingmode","text":"+enum Name Type Go type Default Description Example AzureDataDiskCachingMode string string +enum","title":" AzureDataDiskCachingMode"},{"location":"executor_swagger/#azuredatadiskkind","text":"+enum Name Type Go type Default Description Example AzureDataDiskKind string string +enum","title":" AzureDataDiskKind"},{"location":"executor_swagger/#azurediskvolumesource","text":"Properties Name Type Go type Required Default Description Example cachingMode AzureDataDiskCachingMode AzureDataDiskCachingMode diskName string string diskName is the Name of the data disk in the blob storage diskURI string string diskURI is the URI of data disk in the blob storage fsType string string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. +optional kind AzureDataDiskKind AzureDataDiskKind readOnly boolean bool readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional","title":" AzureDiskVolumeSource"},{"location":"executor_swagger/#azurefilevolumesource","text":"Properties Name Type Go type Required Default Description Example readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional secretName string string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string string shareName is the azure share Name","title":" AzureFileVolumeSource"},{"location":"executor_swagger/#backoff","text":"Backoff is a backoff strategy to use within retryStrategy Properties Name Type Go type Required Default Description Example duration string string Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. 
\"2m\", \"1h\") factor IntOrString IntOrString maxDuration string string MaxDuration is the maximum amount of time allowed for a workflow in the backoff strategy","title":" Backoff"},{"location":"executor_swagger/#basicauth","text":"BasicAuth describes the secret selectors required for basic authentication Properties Name Type Go type Required Default Description Example passwordSecret SecretKeySelector SecretKeySelector usernameSecret SecretKeySelector SecretKeySelector","title":" BasicAuth"},{"location":"executor_swagger/#csivolumesource","text":"Represents a source location of a volume to mount, managed by an external CSI driver Properties Name Type Go type Required Default Description Example driver string string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string string fsType to mount. Ex. \"ext4\", \"xfs\", \"ntfs\". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. +optional nodePublishSecretRef LocalObjectReference LocalObjectReference readOnly boolean bool readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). +optional volumeAttributes map of string map[string]string volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. +optional","title":" CSIVolumeSource"},{"location":"executor_swagger/#cache","text":"Cache is the configuration for the type of cache to be used Properties Name Type Go type Required Default Description Example configMap ConfigMapKeySelector ConfigMapKeySelector","title":" Cache"},{"location":"executor_swagger/#capabilities","text":"Properties Name Type Go type Required Default Description Example add [] Capability []Capability Added capabilities +optional drop [] Capability []Capability Removed capabilities +optional","title":" Capabilities"},{"location":"executor_swagger/#capability","text":"Capability represent POSIX capabilities type Name Type Go type Default Description Example Capability string string Capability represent POSIX capabilities type","title":" Capability"},{"location":"executor_swagger/#cephfsvolumesource","text":"Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example monitors []string []string monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / +optional readOnly boolean bool readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it +optional secretFile string string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it +optional secretRef LocalObjectReference LocalObjectReference user string string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it +optional","title":" CephFSVolumeSource"},{"location":"executor_swagger/#cindervolumesource","text":"A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md +optional readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md +optional secretRef LocalObjectReference LocalObjectReference volumeID string string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md","title":" CinderVolumeSource"},{"location":"executor_swagger/#clientcertauth","text":"ClientCertAuth holds necessary information for client authentication via certificates Properties Name Type Go type Required Default Description Example clientCertSecret SecretKeySelector SecretKeySelector clientKeySecret SecretKeySelector SecretKeySelector","title":" ClientCertAuth"},{"location":"executor_swagger/#configmapenvsource","text":"The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Properties Name Type Go type Required Default Description Example name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool Specify whether the ConfigMap must be defined +optional","title":" ConfigMapEnvSource"},{"location":"executor_swagger/#configmapkeyselector","text":"+structType=atomic Properties Name Type Go type Required Default Description Example key string string The key to select. name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool Specify whether the ConfigMap or its key must be defined +optional","title":" ConfigMapKeySelector"},{"location":"executor_swagger/#configmapprojection","text":"The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. 
Properties Name Type Go type Required Default Description Example items [] KeyToPath []*KeyToPath items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool optional specify whether the ConfigMap or its keys must be defined +optional","title":" ConfigMapProjection"},{"location":"executor_swagger/#configmapvolumesource","text":"The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional items [] KeyToPath []*KeyToPath items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool optional specify whether the ConfigMap or its keys must be defined +optional","title":" ConfigMapVolumeSource"},{"location":"executor_swagger/#container","text":"Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. 
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. 
Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional","title":" Container"},{"location":"executor_swagger/#containernode","text":"Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional dependencies []string []string env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. 
More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional","title":" ContainerNode"},{"location":"executor_swagger/#containerport","text":"Properties Name Type Go type Required Default Description Example containerPort int32 (formatted integer) int32 Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. 
hostIP string string What host IP to bind the external port to. +optional hostPort int32 (formatted integer) int32 Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. +optional name string string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. +optional protocol Protocol Protocol","title":" ContainerPort"},{"location":"executor_swagger/#containersetretrystrategy","text":"Properties Name Type Go type Required Default Description Example duration string string Duration is the time between each retry, examples values are \"300ms\", \"1s\" or \"5m\". Valid time units are \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\". retries IntOrString IntOrString","title":" ContainerSetRetryStrategy"},{"location":"executor_swagger/#containersettemplate","text":"Properties Name Type Go type Required Default Description Example containers [] ContainerNode []*ContainerNode retryStrategy ContainerSetRetryStrategy ContainerSetRetryStrategy volumeMounts [] VolumeMount []*VolumeMount","title":" ContainerSetTemplate"},{"location":"executor_swagger/#continueon","text":"It can be specified if the workflow should continue when the pod errors, fails or both. Properties Name Type Go type Required Default Description Example error boolean bool +optional failed boolean bool +optional","title":" ContinueOn"},{"location":"executor_swagger/#counter","text":"Counter is a Counter prometheus metric Properties Name Type Go type Required Default Description Example value string string Value is the value of the metric","title":" Counter"},{"location":"executor_swagger/#creates3bucketoptions","text":"CreateS3BucketOptions options used to determine automatic automatic bucket-creation process Properties Name Type Go type Required Default Description Example objectLocking boolean bool ObjectLocking Enable object locking","title":" CreateS3BucketOptions"},{"location":"executor_swagger/#dagtask","text":"DAGTask represents a node in the graph during DAG execution Properties Name Type Go type Required Default Description Example arguments Arguments Arguments continueOn ContinueOn ContinueOn dependencies []string []string Dependencies are name of other targets which this depends on depends string string Depends are name of other targets which this depends on hooks LifecycleHooks LifecycleHooks inline Template Template name string string Name is the name of the target onExit string string OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template. DEPRECATED: Use Hooks[exit].Template instead. template string string Name of template to execute templateRef TemplateRef TemplateRef when string string When is an expression in which the task should conditionally execute withItems [] Item []Item WithItems expands a task into multiple parallel tasks from the items in the list withParam string string WithParam expands a task into multiple parallel tasks from the value in the parameter, which is expected to be a JSON list. 
withSequence Sequence Sequence","title":" DAGTask"},{"location":"executor_swagger/#dagtemplate","text":"DAGTemplate is a template subtype for directed acyclic graph templates Properties Name Type Go type Required Default Description Example failFast boolean bool This flag is for DAG logic. The DAG logic has a built-in \"fail fast\" feature to stop scheduling new steps, as soon as it detects that one of the DAG nodes is failed. Then it waits until all DAG nodes are completed before failing the DAG itself. The FailFast flag default is true, if set to false, it will allow a DAG to run all branches of the DAG to completion (either success or failure), regardless of the failed outcomes of branches in the DAG. More info and example about this feature at https://github.com/argoproj/argo-workflows/issues/1442 target string string Target are one or more names of targets to execute in a DAG tasks [] DAGTask []*DAGTask Tasks are a list of DAG tasks +patchStrategy=merge +patchMergeKey=name","title":" DAGTemplate"},{"location":"executor_swagger/#data","text":"Data is a data template Properties Name Type Go type Required Default Description Example source DataSource DataSource transformation Transformation Transformation","title":" Data"},{"location":"executor_swagger/#datasource","text":"DataSource sources external data into a data template Properties Name Type Go type Required Default Description Example artifactPaths ArtifactPaths ArtifactPaths","title":" DataSource"},{"location":"executor_swagger/#downwardapiprojection","text":"Note that this is identical to a downwardAPI volume source without the default mode. Properties Name Type Go type Required Default Description Example items [] DownwardAPIVolumeFile []*DownwardAPIVolumeFile Items is a list of DownwardAPIVolume file +optional","title":" DownwardAPIProjection"},{"location":"executor_swagger/#downwardapivolumefile","text":"DownwardAPIVolumeFile represents information to create the file containing the pod field Properties Name Type Go type Required Default Description Example fieldRef ObjectFieldSelector ObjectFieldSelector mode int32 (formatted integer) int32 Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional path string string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef ResourceFieldSelector ResourceFieldSelector","title":" DownwardAPIVolumeFile"},{"location":"executor_swagger/#downwardapivolumesource","text":"Downward API volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 Optional: mode bits to use on created files by default. Must be a Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. 
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional items [] DownwardAPIVolumeFile []*DownwardAPIVolumeFile Items is a list of downward API volume file +optional","title":" DownwardAPIVolumeSource"},{"location":"executor_swagger/#duration","text":"Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json. interface{}","title":" Duration"},{"location":"executor_swagger/#emptydirvolumesource","text":"Empty directory volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example medium StorageMedium StorageMedium sizeLimit Quantity Quantity","title":" EmptyDirVolumeSource"},{"location":"executor_swagger/#envfromsource","text":"EnvFromSource represents the source of a set of ConfigMaps Properties Name Type Go type Required Default Description Example configMapRef ConfigMapEnvSource ConfigMapEnvSource prefix string string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. +optional secretRef SecretEnvSource SecretEnvSource","title":" EnvFromSource"},{"location":"executor_swagger/#envvar","text":"Properties Name Type Go type Required Default Description Example name string string Name of the environment variable. Must be a C_IDENTIFIER. value string string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to \"\". +optional valueFrom EnvVarSource EnvVarSource","title":" EnvVar"},{"location":"executor_swagger/#envvarsource","text":"Properties Name Type Go type Required Default Description Example configMapKeyRef ConfigMapKeySelector ConfigMapKeySelector fieldRef ObjectFieldSelector ObjectFieldSelector resourceFieldRef ResourceFieldSelector ResourceFieldSelector secretKeyRef SecretKeySelector SecretKeySelector","title":" EnvVarSource"},{"location":"executor_swagger/#ephemeralvolumesource","text":"Properties Name Type Go type Required Default Description Example volumeClaimTemplate PersistentVolumeClaimTemplate PersistentVolumeClaimTemplate","title":" EphemeralVolumeSource"},{"location":"executor_swagger/#execaction","text":"Properties Name Type Go type Required Default Description Example command []string []string Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions (' ', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 
+optional","title":" ExecAction"},{"location":"executor_swagger/#executetemplateargs","text":"Properties Name Type Go type Required Default Description Example template Template Template \u2713 workflow Workflow Workflow \u2713","title":" ExecuteTemplateArgs"},{"location":"executor_swagger/#executetemplatereply","text":"Properties Name Type Go type Required Default Description Example node NodeResult NodeResult requeue Duration Duration","title":" ExecuteTemplateReply"},{"location":"executor_swagger/#executorconfig","text":"Properties Name Type Go type Required Default Description Example serviceAccountName string string ServiceAccountName specifies the service account name of the executor container.","title":" ExecutorConfig"},{"location":"executor_swagger/#fcvolumesource","text":"Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine +optional lun int32 (formatted integer) int32 lun is Optional: FC target lun number +optional readOnly boolean bool readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional targetWWNs []string []string targetWWNs is Optional: FC target worldwide names (WWNs) +optional wwids []string []string wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. +optional","title":" FCVolumeSource"},{"location":"executor_swagger/#fieldsv1","text":"Each key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f: ', where is the name of a field in a struct, or key in a map 'v: ', where is the exact json formatted value of a list item 'i: ', where is position of a item in a list 'k: ', where is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set. The exact format is defined in sigs.k8s.io/structured-merge-diff +protobuf.options.(gogoproto.goproto_stringer)=false interface{}","title":" FieldsV1"},{"location":"executor_swagger/#flexvolumesource","text":"FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Properties Name Type Go type Required Default Description Example driver string string driver is the name of the driver to use for this volume. fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". The default filesystem depends on FlexVolume script. +optional options map of string map[string]string options is Optional: this field holds extra command options if any. +optional readOnly boolean bool readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
+optional secretRef LocalObjectReference LocalObjectReference","title":" FlexVolumeSource"},{"location":"executor_swagger/#flockervolumesource","text":"One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example datasetName string string datasetName is Name of the dataset stored as metadata -> name on the dataset for Flocker should be considered as deprecated +optional datasetUUID string string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset +optional","title":" FlockerVolumeSource"},{"location":"executor_swagger/#gcepersistentdiskvolumesource","text":"A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine +optional partition int32 (formatted integer) int32 partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk +optional pdName string string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean bool readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk +optional","title":" GCEPersistentDiskVolumeSource"},{"location":"executor_swagger/#gcsartifact","text":"GCSArtifact is the location of a GCS artifact Properties Name Type Go type Required Default Description Example bucket string string Bucket is the name of the bucket key string string Key is the path in the bucket where the artifact resides serviceAccountKeySecret SecretKeySelector SecretKeySelector","title":" GCSArtifact"},{"location":"executor_swagger/#grpcaction","text":"Properties Name Type Go type Required Default Description Example port int32 (formatted integer) int32 Port number of the gRPC service. Number must be in the range 1 to 65535. service string string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC. 
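As an illustrative aside for the GCSArtifact entry above: a minimal sketch of a GCS input artifact. The bucket, key and secret names are placeholders.

```yaml
inputs:
  artifacts:
  - name: dataset
    path: /tmp/dataset.csv
    gcs:
      bucket: my-bucket                 # placeholder bucket name
      key: path/to/dataset.csv          # placeholder object key
      serviceAccountKeySecret:          # SecretKeySelector holding the GCP key
        name: my-gcs-credentials
        key: serviceAccountKey
```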
+optional +default=\"\" | |","title":" GRPCAction"},{"location":"executor_swagger/#gauge","text":"Gauge is a Gauge prometheus metric Properties Name Type Go type Required Default Description Example operation GaugeOperation GaugeOperation realtime boolean bool Realtime emits this metric in real time if applicable value string string Value is the value to be used in the operation with the metric's current value. If no operation is set, value is the value of the metric","title":" Gauge"},{"location":"executor_swagger/#gaugeoperation","text":"Name Type Go type Default Description Example GaugeOperation string string","title":" GaugeOperation"},{"location":"executor_swagger/#gitartifact","text":"GitArtifact is the location of an git artifact Properties Name Type Go type Required Default Description Example branch string string Branch is the branch to fetch when SingleBranch is enabled depth uint64 (formatted integer) uint64 Depth specifies clones/fetches should be shallow and include the given number of commits from the branch tip disableSubmodules boolean bool DisableSubmodules disables submodules during git clone fetch []string []string Fetch specifies a number of refs that should be fetched before checkout insecureIgnoreHostKey boolean bool InsecureIgnoreHostKey disables SSH strict host key checking during git clone passwordSecret SecretKeySelector SecretKeySelector repo string string Repo is the git repository revision string string Revision is the git commit, tag, branch to checkout singleBranch boolean bool SingleBranch enables single branch clone, using the branch parameter sshPrivateKeySecret SecretKeySelector SecretKeySelector usernameSecret SecretKeySelector SecretKeySelector","title":" GitArtifact"},{"location":"executor_swagger/#gitrepovolumesource","text":"DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Properties Name Type Go type Required Default Description Example directory string string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. +optional repository string string repository is the URL revision string string revision is the commit hash for the specified revision. +optional","title":" GitRepoVolumeSource"},{"location":"executor_swagger/#glusterfsvolumesource","text":"Glusterfs volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example endpoints string string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean bool readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. 
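As an illustrative aside for the GitArtifact entry above: a minimal sketch of a shallow git-based input artifact. The repository URL and revision are placeholders.

```yaml
inputs:
  artifacts:
  - name: source
    path: /src
    git:
      repo: https://github.com/argoproj/argo-workflows.git   # placeholder repo
      revision: v3.5.0          # commit, tag, or branch to check out
      depth: 1                  # shallow fetch of a single commit
      disableSubmodules: true
```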
More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod +optional","title":" GlusterfsVolumeSource"},{"location":"executor_swagger/#hdfsartifact","text":"HDFSArtifact is the location of an HDFS artifact Properties Name Type Go type Required Default Description Example addresses []string []string Addresses is accessible addresses of HDFS name nodes force boolean bool Force copies a file forcibly even if it exists hdfsUser string string HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used. krbCCacheSecret SecretKeySelector SecretKeySelector krbConfigConfigMap ConfigMapKeySelector ConfigMapKeySelector krbKeytabSecret SecretKeySelector SecretKeySelector krbRealm string string KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used. krbServicePrincipalName string string KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used. krbUsername string string KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used. path string string Path is a file path in HDFS","title":" HDFSArtifact"},{"location":"executor_swagger/#http","text":"Properties Name Type Go type Required Default Description Example body string string Body is content of the HTTP Request bodyFrom HTTPBodySource HTTPBodySource headers HTTPHeaders HTTPHeaders insecureSkipVerify boolean bool InsecureSkipVerify is a bool when if set to true will skip TLS verification for the HTTP client method string string Method is HTTP methods for HTTP Request successCondition string string SuccessCondition is an expression if evaluated to true is considered successful timeoutSeconds int64 (formatted integer) int64 TimeoutSeconds is request timeout for HTTP Request. Default is 30 seconds url string string URL of the HTTP Request","title":" HTTP"},{"location":"executor_swagger/#httpartifact","text":"HTTPArtifact allows a file served on HTTP to be placed as an input artifact in a container Properties Name Type Go type Required Default Description Example auth HTTPAuth HTTPAuth headers [] Header []*Header Headers are an optional list of headers to send with HTTP requests for artifacts url string string URL of the artifact","title":" HTTPArtifact"},{"location":"executor_swagger/#httpauth","text":"Properties Name Type Go type Required Default Description Example basicAuth BasicAuth BasicAuth clientCert ClientCertAuth ClientCertAuth oauth2 OAuth2Auth OAuth2Auth","title":" HTTPAuth"},{"location":"executor_swagger/#httpbodysource","text":"Properties Name Type Go type Required Default Description Example bytes []uint8 (formatted integer) []uint8","title":" HTTPBodySource"},{"location":"executor_swagger/#httpgetaction","text":"Properties Name Type Go type Required Default Description Example host string string Host name to connect to, defaults to the pod IP. You probably want to set \"Host\" in httpHeaders instead. +optional httpHeaders [] HTTPHeader []*HTTPHeader Custom headers to set in the request. HTTP allows repeated headers. +optional path string string Path to access on the HTTP server. 
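As an illustrative aside for the HTTP and Header entries above: a minimal sketch of an HTTP template step. The URL and header values are placeholders, and the success condition shown is only one possible form of expression.

```yaml
- name: health-check
  http:
    url: https://example.com/api/health            # placeholder URL
    method: GET
    timeoutSeconds: 20
    headers:
    - name: Accept
      value: application/json
    successCondition: response.statusCode == 200   # illustrative condition
```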
+optional port IntOrString IntOrString scheme URIScheme URIScheme","title":" HTTPGetAction"},{"location":"executor_swagger/#httpheader","text":"Properties Name Type Go type Required Default Description Example name string string value string string valueFrom HTTPHeaderSource HTTPHeaderSource","title":" HTTPHeader"},{"location":"executor_swagger/#httpheadersource","text":"Properties Name Type Go type Required Default Description Example secretKeyRef SecretKeySelector SecretKeySelector","title":" HTTPHeaderSource"},{"location":"executor_swagger/#httpheaders","text":"[] HTTPHeader","title":" HTTPHeaders"},{"location":"executor_swagger/#header","text":"Header indicate a key-value request header to be used when fetching artifacts over HTTP Properties Name Type Go type Required Default Description Example name string string Name is the header name value string string Value is the literal value to use for the header","title":" Header"},{"location":"executor_swagger/#histogram","text":"Histogram is a Histogram prometheus metric Properties Name Type Go type Required Default Description Example buckets [] Amount []Amount Buckets is a list of bucket divisors for the histogram value string string Value is the value of the metric","title":" Histogram"},{"location":"executor_swagger/#hostalias","text":"HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Properties Name Type Go type Required Default Description Example hostnames []string []string Hostnames for the above IP address. ip string string IP address of the host file entry.","title":" HostAlias"},{"location":"executor_swagger/#hostpathtype","text":"+enum Name Type Go type Default Description Example HostPathType string string +enum","title":" HostPathType"},{"location":"executor_swagger/#hostpathvolumesource","text":"Host path volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example path string string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type HostPathType HostPathType","title":" HostPathVolumeSource"},{"location":"executor_swagger/#iscsivolumesource","text":"ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example chapAuthDiscovery boolean bool chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication +optional chapAuthSession boolean bool chapAuthSession defines whether support iSCSI Session CHAP authentication +optional fsType string string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine +optional initiatorName string string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface : will be created for the connection. +optional iqn string string iqn is the target iSCSI Qualified Name. iscsiInterface string string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). 
+optional lun int32 (formatted integer) int32 lun represents iSCSI Target Lun number. portals []string []string portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). +optional readOnly boolean bool readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. +optional secretRef LocalObjectReference LocalObjectReference targetPortal string string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).","title":" ISCSIVolumeSource"},{"location":"executor_swagger/#inputs","text":"Inputs are the mechanism for passing parameters, artifacts, volumes from one template to another Properties Name Type Go type Required Default Description Example artifacts Artifacts Artifacts parameters [] Parameter []*Parameter Parameters are a list of parameters passed as inputs +patchStrategy=merge +patchMergeKey=name","title":" Inputs"},{"location":"executor_swagger/#intorstring","text":"+protobuf=true +protobuf.options.(gogoproto.goproto_stringer)=false +k8s:openapi-gen=true Properties Name Type Go type Required Default Description Example IntVal int32 (formatted integer) int32 StrVal string string Type Type Type","title":" IntOrString"},{"location":"executor_swagger/#item","text":"+protobuf.options.(gogoproto.goproto_stringer)=false +kubebuilder:validation:Type=object interface{}","title":" Item"},{"location":"executor_swagger/#keytopath","text":"Properties Name Type Go type Required Default Description Example key string string key is the key to project. mode int32 (formatted integer) int32 mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional path string string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'.","title":" KeyToPath"},{"location":"executor_swagger/#labelselector","text":"A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. +structType=atomic Properties Name Type Go type Required Default Description Example matchExpressions [] LabelSelectorRequirement []*LabelSelectorRequirement matchExpressions is a list of label selector requirements. The requirements are ANDed. +optional matchLabels map of string map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values array contains only \"value\". The requirements are ANDed. 
+optional","title":" LabelSelector"},{"location":"executor_swagger/#labelselectoroperator","text":"Name Type Go type Default Description Example LabelSelectorOperator string string","title":" LabelSelectorOperator"},{"location":"executor_swagger/#labelselectorrequirement","text":"A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Properties Name Type Go type Required Default Description Example key string string key is the label key that the selector applies to. +patchMergeKey=key +patchStrategy=merge operator LabelSelectorOperator LabelSelectorOperator values []string []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +optional","title":" LabelSelectorRequirement"},{"location":"executor_swagger/#lifecycle","text":"Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Properties Name Type Go type Required Default Description Example postStart LifecycleHandler LifecycleHandler preStop LifecycleHandler LifecycleHandler","title":" Lifecycle"},{"location":"executor_swagger/#lifecyclehandler","text":"LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Properties Name Type Go type Required Default Description Example exec ExecAction ExecAction httpGet HTTPGetAction HTTPGetAction tcpSocket TCPSocketAction TCPSocketAction","title":" LifecycleHandler"},{"location":"executor_swagger/#lifecyclehook","text":"Properties Name Type Go type Required Default Description Example arguments Arguments Arguments expression string string Expression is a condition expression for when a node will be retried. If it evaluates to false, the node will not be retried and the retry strategy will be ignored template string string Template is the name of the template to execute by the hook templateRef TemplateRef TemplateRef","title":" LifecycleHook"},{"location":"executor_swagger/#lifecyclehooks","text":"LifecycleHooks","title":" LifecycleHooks"},{"location":"executor_swagger/#localobjectreference","text":"LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. +structType=atomic Properties Name Type Go type Required Default Description Example name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional","title":" LocalObjectReference"},{"location":"executor_swagger/#managedfieldsentry","text":"ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to. Properties Name Type Go type Required Default Description Example apiVersion string string APIVersion defines the version of this resource that this field set applies to. The format is \"group/version\" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted. 
fieldsType string string FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: \"FieldsV1\" fieldsV1 FieldsV1 FieldsV1 manager string string Manager is an identifier of the workflow managing these fields. operation ManagedFieldsOperationType ManagedFieldsOperationType subresource string string Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource. time Time Time","title":" ManagedFieldsEntry"},{"location":"executor_swagger/#managedfieldsoperationtype","text":"Name Type Go type Default Description Example ManagedFieldsOperationType string string","title":" ManagedFieldsOperationType"},{"location":"executor_swagger/#manifestfrom","text":"Properties Name Type Go type Required Default Description Example artifact Artifact Artifact","title":" ManifestFrom"},{"location":"executor_swagger/#memoize","text":"Memoization enables caching for the Outputs of the template Properties Name Type Go type Required Default Description Example cache Cache Cache key string string Key is the key to use as the caching key maxAge string string MaxAge is the maximum age (e.g. \"180s\", \"24h\") of an entry that is still considered valid. If an entry is older than the MaxAge, it will be ignored.","title":" Memoize"},{"location":"executor_swagger/#metadata","text":"Pod metdata Properties Name Type Go type Required Default Description Example annotations map of string map[string]string labels map of string map[string]string","title":" Metadata"},{"location":"executor_swagger/#metriclabel","text":"MetricLabel is a single label for a prometheus metric Properties Name Type Go type Required Default Description Example key string string value string string","title":" MetricLabel"},{"location":"executor_swagger/#metrics","text":"Metrics are a list of metrics emitted from a Workflow/Template Properties Name Type Go type Required Default Description Example prometheus [] Prometheus []*Prometheus Prometheus is a list of prometheus metrics to be emitted","title":" Metrics"},{"location":"executor_swagger/#mountpropagationmode","text":"+enum Name Type Go type Default Description Example MountPropagationMode string string +enum","title":" MountPropagationMode"},{"location":"executor_swagger/#mutex","text":"Mutex holds Mutex configuration Properties Name Type Go type Required Default Description Example name string string name of the mutex namespace string string \"[namespace of workflow]\"","title":" Mutex"},{"location":"executor_swagger/#nfsvolumesource","text":"NFS volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example path string string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean bool readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs +optional server string string server is the hostname or IP address of the NFS server. 
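As an illustrative aside for the Memoize entry above: a minimal sketch of template memoization backed by a ConfigMap cache. The key expression and ConfigMap name are placeholders.

```yaml
memoize:
  key: "{{inputs.parameters.dataset}}"   # caching key built from inputs (placeholder)
  maxAge: "24h"                          # entries older than this are ignored
  cache:
    configMap:
      name: my-memoize-cache             # placeholder ConfigMap backing the cache
```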
More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs","title":" NFSVolumeSource"},{"location":"executor_swagger/#nodeaffinity","text":"Properties Name Type Go type Required Default Description Example preferredDuringSchedulingIgnoredDuringExecution [] PreferredSchedulingTerm []*PreferredSchedulingTerm The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. +optional requiredDuringSchedulingIgnoredDuringExecution NodeSelector NodeSelector","title":" NodeAffinity"},{"location":"executor_swagger/#nodephase","text":"Name Type Go type Default Description Example NodePhase string string","title":" NodePhase"},{"location":"executor_swagger/#noderesult","text":"Properties Name Type Go type Required Default Description Example message string string outputs Outputs Outputs phase NodePhase NodePhase progress Progress Progress","title":" NodeResult"},{"location":"executor_swagger/#nodeselector","text":"A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. +structType=atomic Properties Name Type Go type Required Default Description Example nodeSelectorTerms [] NodeSelectorTerm []*NodeSelectorTerm Required. A list of node selector terms. The terms are ORed.","title":" NodeSelector"},{"location":"executor_swagger/#nodeselectoroperator","text":"A node selector operator is the set of operators that can be used in a node selector requirement. +enum Name Type Go type Default Description Example NodeSelectorOperator string string A node selector operator is the set of operators that can be used in a node selector requirement. +enum","title":" NodeSelectorOperator"},{"location":"executor_swagger/#nodeselectorrequirement","text":"A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Properties Name Type Go type Required Default Description Example key string string The label key that the selector applies to. operator NodeSelectorOperator NodeSelectorOperator values []string []string An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. +optional","title":" NodeSelectorRequirement"},{"location":"executor_swagger/#nodeselectorterm","text":"A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. +structType=atomic Properties Name Type Go type Required Default Description Example matchExpressions [] NodeSelectorRequirement []*NodeSelectorRequirement A list of node selector requirements by node's labels. 
+optional matchFields [] NodeSelectorRequirement []*NodeSelectorRequirement A list of node selector requirements by node's fields. +optional","title":" NodeSelectorTerm"},{"location":"executor_swagger/#nonestrategy","text":"NoneStrategy indicates to skip tar process and upload the files or directory tree as independent files. Note that if the artifact is a directory, the artifact driver must support the ability to save/load the directory appropriately. interface{}","title":" NoneStrategy"},{"location":"executor_swagger/#oauth2auth","text":"OAuth2Auth holds all information for client authentication via OAuth2 tokens Properties Name Type Go type Required Default Description Example clientIDSecret SecretKeySelector SecretKeySelector clientSecretSecret SecretKeySelector SecretKeySelector endpointParams [] OAuth2EndpointParam []*OAuth2EndpointParam scopes []string []string tokenURLSecret SecretKeySelector SecretKeySelector","title":" OAuth2Auth"},{"location":"executor_swagger/#oauth2endpointparam","text":"EndpointParam is for requesting optional fields that should be sent in the oauth request Properties Name Type Go type Required Default Description Example key string string Name is the header name value string string Value is the literal value to use for the header","title":" OAuth2EndpointParam"},{"location":"executor_swagger/#ossartifact","text":"OSSArtifact is the location of an Alibaba Cloud OSS artifact Properties Name Type Go type Required Default Description Example accessKeySecret SecretKeySelector SecretKeySelector bucket string string Bucket is the name of the bucket createBucketIfNotPresent boolean bool CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist endpoint string string Endpoint is the hostname of the bucket endpoint key string string Key is the path in the bucket where the artifact resides lifecycleRule OSSLifecycleRule OSSLifecycleRule secretKeySecret SecretKeySelector SecretKeySelector securityToken string string SecurityToken is the user's temporary security token. For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm useSDKCreds boolean bool UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":" OSSArtifact"},{"location":"executor_swagger/#osslifecyclerule","text":"OSSLifecycleRule specifies how to manage bucket's lifecycle Properties Name Type Go type Required Default Description Example markDeletionAfterDays int32 (formatted integer) int32 MarkDeletionAfterDays is the number of days before we delete objects in the bucket markInfrequentAccessAfterDays int32 (formatted integer) int32 MarkInfrequentAccessAfterDays is the number of days before we convert the objects in the bucket to Infrequent Access (IA) storage type","title":" OSSLifecycleRule"},{"location":"executor_swagger/#objectfieldselector","text":"+structType=atomic Properties Name Type Go type Required Default Description Example apiVersion string string Version of the schema the FieldPath is written in terms of, defaults to \"v1\". 
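As an illustrative aside for the NodeAffinity, NodeSelector and NodeSelectorRequirement entries above: a minimal sketch combining a required and a preferred term. The label keys, values and weight are placeholders.

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:    # NodeSelector: terms are ORed
      nodeSelectorTerms:
      - matchExpressions:                              # requirements are ANDed
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
    preferredDuringSchedulingIgnoredDuringExecution:   # soft preference, weight 1-100
    - weight: 50
      preference:
        matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["us-east-1a"]
```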
+optional fieldPath string string Path of the field to select in the specified API version.","title":" ObjectFieldSelector"},{"location":"executor_swagger/#objectmeta","text":"Properties Name Type Go type Required Default Description Example name string string namespace string string uid string string","title":" ObjectMeta"},{"location":"executor_swagger/#outputs","text":"Outputs hold parameters, artifacts, and results from a step Properties Name Type Go type Required Default Description Example artifacts Artifacts Artifacts exitCode string string ExitCode holds the exit code of a script template parameters [] Parameter []*Parameter Parameters holds the list of output parameters produced by a step +patchStrategy=merge +patchMergeKey=name result string string Result holds the result (stdout) of a script template","title":" Outputs"},{"location":"executor_swagger/#ownerreference","text":"OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. +structType=atomic Properties Name Type Go type Required Default Description Example apiVersion string string API version of the referent. blockOwnerDeletion boolean bool If true, AND if the owner has the \"foregroundDeletion\" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs \"delete\" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. +optional controller boolean bool If true, this reference points to the managing controller. +optional kind string string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string string Name of the referent. 
More info: http://kubernetes.io/docs/user-guide/identifiers#names uid UID UID","title":" OwnerReference"},{"location":"executor_swagger/#parallelsteps","text":"+kubebuilder:validation:Type=array interface{}","title":" ParallelSteps"},{"location":"executor_swagger/#parameter","text":"Parameter indicate a passed string parameter to a service template with an optional default value Properties Name Type Go type Required Default Description Example default AnyString AnyString description AnyString AnyString enum [] AnyString []AnyString Enum holds a list of string values to choose from, for the actual value of the parameter globalName string string GlobalName exports an output parameter to the global scope, making it available as '{{workflow.outputs.parameters.XXXX}} and in workflow.status.outputs.parameters name string string Name is the parameter name value AnyString AnyString valueFrom ValueFrom ValueFrom","title":" Parameter"},{"location":"executor_swagger/#persistentvolumeaccessmode","text":"+enum Name Type Go type Default Description Example PersistentVolumeAccessMode string string +enum","title":" PersistentVolumeAccessMode"},{"location":"executor_swagger/#persistentvolumeclaimspec","text":"PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Properties Name Type Go type Required Default Description Example accessModes [] PersistentVolumeAccessMode []PersistentVolumeAccessMode accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 +optional dataSource TypedLocalObjectReference TypedLocalObjectReference dataSourceRef TypedLocalObjectReference TypedLocalObjectReference resources ResourceRequirements ResourceRequirements selector LabelSelector LabelSelector storageClassName string string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 +optional volumeMode PersistentVolumeMode PersistentVolumeMode volumeName string string volumeName is the binding reference to the PersistentVolume backing this claim. +optional","title":" PersistentVolumeClaimSpec"},{"location":"executor_swagger/#persistentvolumeclaimtemplate","text":"PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. Properties Name Type Go type Required Default Description Example annotations map of string map[string]string Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations +optional clusterName string string Deprecated: ClusterName is a legacy field that was always cleared by the system and never used; it will be removed completely in 1.25. The name in the go struct is changed to help clients detect accidental use. +optional | | | creationTimestamp | Time | Time | | | | | | deletionGracePeriodSeconds | int64 (formatted integer)| int64 | | | Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. 
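As an illustrative aside for the Parameter entry above: a minimal sketch of an input parameter with a default and an enum, plus an output parameter exported to the global scope. The names, values and file path are placeholders.

```yaml
inputs:
  parameters:
  - name: environment
    default: "staging"                  # used when no value is supplied
    enum: ["staging", "production"]     # allowed values
outputs:
  parameters:
  - name: result
    globalName: run-result              # available as {{workflow.outputs.parameters.run-result}}
    valueFrom:
      path: /tmp/result.txt             # placeholder file produced by the step
```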
+optional | | | deletionTimestamp | Time | Time | | | | | | finalizers | []string| []string | | | Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. +optional +patchStrategy=merge | | | generateName | string| string | | | GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will return a 409. Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency +optional | | | generation | int64 (formatted integer)| int64 | | | A sequence number representing a specific generation of the desired state. Populated by the system. Read-only. +optional | | | labels | map of string| map[string]string | | | Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels +optional | | | managedFields | [] ManagedFieldsEntry | []*ManagedFieldsEntry | | | ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like \"ci-cd\". The set of fields is always in the version that the workflow used when modifying the object. +optional | | | name | string| string | | | Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names +optional | | | namespace | string| string | | | Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. 
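As an illustrative aside for the PersistentVolumeClaimSpec and PersistentVolumeClaimTemplate entries above: a minimal sketch of a workflow-level volume claim template. The claim name, storage class and size are placeholders.

```yaml
volumeClaimTemplates:                 # list of PersistentVolumeClaimTemplate
- metadata:
    name: workdir                     # referenced by volumeMounts in the templates
  spec:                               # PersistentVolumeClaimSpec
    accessModes: ["ReadWriteOnce"]
    storageClassName: standard        # placeholder StorageClass
    resources:
      requests:
        storage: 1Gi
```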
More info: http://kubernetes.io/docs/user-guide/namespaces +optional | | | ownerReferences | [] OwnerReference | []*OwnerReference | | | List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. +optional +patchMergeKey=uid +patchStrategy=merge | | | resourceVersion | string| string | | | An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency +optional | | | selfLink | string| string | | | Deprecated: selfLink is a legacy read-only field that is no longer populated by the system. +optional | | | spec | PersistentVolumeClaimSpec | PersistentVolumeClaimSpec | | | | | | uid | UID | UID | | | | |","title":" PersistentVolumeClaimTemplate"},{"location":"executor_swagger/#persistentvolumeclaimvolumesource","text":"This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). Properties Name Type Go type Required Default Description Example claimName string string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean bool readOnly Will force the ReadOnly setting in VolumeMounts. Default false. +optional","title":" PersistentVolumeClaimVolumeSource"},{"location":"executor_swagger/#persistentvolumemode","text":"+enum Name Type Go type Default Description Example PersistentVolumeMode string string +enum","title":" PersistentVolumeMode"},{"location":"executor_swagger/#photonpersistentdiskvolumesource","text":"Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. pdID string string pdID is the ID that identifies Photon Controller persistent disk","title":" PhotonPersistentDiskVolumeSource"},{"location":"executor_swagger/#plugin","text":"Plugin is an Object with exactly one key interface{}","title":" Plugin"},{"location":"executor_swagger/#podaffinity","text":"Properties Name Type Go type Required Default Description Example preferredDuringSchedulingIgnoredDuringExecution [] WeightedPodAffinityTerm []*WeightedPodAffinityTerm The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. +optional requiredDuringSchedulingIgnoredDuringExecution [] PodAffinityTerm []*PodAffinityTerm If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +optional","title":" PodAffinity"},{"location":"executor_swagger/#podaffinityterm","text":"Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running Properties Name Type Go type Required Default Description Example labelSelector LabelSelector LabelSelector namespaceSelector LabelSelector LabelSelector namespaces []string []string namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means \"this pod's namespace\". +optional topologyKey string string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.","title":" PodAffinityTerm"},{"location":"executor_swagger/#podantiaffinity","text":"Properties Name Type Go type Required Default Description Example preferredDuringSchedulingIgnoredDuringExecution [] WeightedPodAffinityTerm []*WeightedPodAffinityTerm The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. +optional requiredDuringSchedulingIgnoredDuringExecution [] PodAffinityTerm []*PodAffinityTerm If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +optional","title":" PodAntiAffinity"},{"location":"executor_swagger/#podfsgroupchangepolicy","text":"PodFSGroupChangePolicy holds policies that will be used for applying fsGroup to a volume when volume is mounted. +enum Name Type Go type Default Description Example PodFSGroupChangePolicy string string PodFSGroupChangePolicy holds policies that will be used for applying fsGroup to a volume when volume is mounted. +enum","title":" PodFSGroupChangePolicy"},{"location":"executor_swagger/#podsecuritycontext","text":"Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Properties Name Type Go type Required Default Description Example fsGroup int64 (formatted integer) int64 A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: The owning GID will be the FSGroup The setgid bit is set (new files created in the volume will be owned by FSGroup) The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. +optional | | | fsGroupChangePolicy | PodFSGroupChangePolicy | PodFSGroupChangePolicy | | | | | | runAsGroup | int64 (formatted integer)| int64 | | | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. +optional | | | runAsNonRoot | boolean| bool | | | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. +optional | | | runAsUser | int64 (formatted integer)| int64 | | | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. +optional | | | seLinuxOptions | SELinuxOptions | SELinuxOptions | | | | | | seccompProfile | SeccompProfile | SeccompProfile | | | | | | supplementalGroups | []int64 (formatted integer)| []int64 | | | A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows. +optional | | | sysctls | [] Sysctl | []*Sysctl | | | Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. 
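As an illustrative aside for the PodSecurityContext entry above: a minimal sketch of a pod-level security context. The UID/GID values are placeholders; container-level securityContext fields still take precedence where both are set.

```yaml
securityContext:                  # pod-level PodSecurityContext
  runAsNonRoot: true              # refuse to start containers running as UID 0
  runAsUser: 1000                 # placeholder UID
  runAsGroup: 3000                # placeholder GID
  fsGroup: 2000                   # supplemental group applied to supported volumes
  seccompProfile:
    type: RuntimeDefault
```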
+optional | | | windowsOptions | WindowsSecurityContextOptions | WindowsSecurityContextOptions | | | | |","title":" PodSecurityContext"},{"location":"executor_swagger/#portworxvolumesource","text":"Properties Name Type Go type Required Default Description Example fsType string string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\". Implicitly inferred to be \"ext4\" if unspecified. readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional volumeID string string volumeID uniquely identifies a Portworx volume","title":" PortworxVolumeSource"},{"location":"executor_swagger/#preferredschedulingterm","text":"An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Properties Name Type Go type Required Default Description Example preference NodeSelectorTerm NodeSelectorTerm weight int32 (formatted integer) int32 Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.","title":" PreferredSchedulingTerm"},{"location":"executor_swagger/#probe","text":"Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Properties Name Type Go type Required Default Description Example exec ExecAction ExecAction failureThreshold int32 (formatted integer) int32 Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. +optional grpc GRPCAction GRPCAction httpGet HTTPGetAction HTTPGetAction initialDelaySeconds int32 (formatted integer) int32 Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes +optional periodSeconds int32 (formatted integer) int32 How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. +optional successThreshold int32 (formatted integer) int32 Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. +optional tcpSocket TCPSocketAction TCPSocketAction terminationGracePeriodSeconds int64 (formatted integer) int64 Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. +optional timeoutSeconds int32 (formatted integer) int32 Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. 
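As an illustrative aside for the Probe, HTTPGetAction and ExecAction entries above: a minimal sketch of readiness and liveness probes. The port, path and command are placeholders.

```yaml
readinessProbe:                   # Probe using HTTPGetAction
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 2
  failureThreshold: 3
livenessProbe:                    # Probe using ExecAction (exec'd directly, not via a shell)
  exec:
    command: ["cat", "/tmp/healthy"]
  periodSeconds: 30
```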
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes +optional","title":" Probe"},{"location":"executor_swagger/#procmounttype","text":"+enum Name Type Go type Default Description Example ProcMountType string string +enum","title":" ProcMountType"},{"location":"executor_swagger/#progress","text":"Name Type Go type Default Description Example Progress string string","title":" Progress"},{"location":"executor_swagger/#projectedvolumesource","text":"Represents a projected volume source Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional sources [] VolumeProjection []*VolumeProjection sources is the list of volume projections +optional","title":" ProjectedVolumeSource"},{"location":"executor_swagger/#prometheus","text":"Prometheus is a prometheus metric to be emitted Properties Name Type Go type Required Default Description Example counter Counter Counter gauge Gauge Gauge help string string Help is a string that describes the metric histogram Histogram Histogram labels [] MetricLabel []*MetricLabel Labels is a list of metric labels name string string Name is the name of the metric when string string When is a conditional statement that decides when to emit the metric","title":" Prometheus"},{"location":"executor_swagger/#protocol","text":"+enum Name Type Go type Default Description Example Protocol string string +enum","title":" Protocol"},{"location":"executor_swagger/#pullpolicy","text":"PullPolicy describes a policy for if/when to pull a container image +enum Name Type Go type Default Description Example PullPolicy string string PullPolicy describes a policy for if/when to pull a container image +enum","title":" PullPolicy"},{"location":"executor_swagger/#quantity","text":"The serialization format is: ::= (Note that may be empty, from the \"\" case in .) ::= 0 | 1 | ... | 9 ::= | ::= | . | . | . ::= \"+\" | \"-\" ::= | ::= | | ::= Ki | Mi | Gi | Ti | Pi | Ei (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html) ::= m | \"\" | k | M | G | T | P | E (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.) ::= \"e\" | \"E\" No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized. Before serializing, Quantity will be put in \"canonical form\". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that: a. No precision is lost b. No fractional digits will be emitted c. The exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative. 
Examples: 1.5 will be serialized as \"1500m\" 1.5Gi will be serialized as \"1536Mi\" Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise. Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.) This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation. +protobuf=true +protobuf.embed=string +protobuf.options.marshal=false +protobuf.options.(gogoproto.goproto_stringer)=false +k8s:deepcopy-gen=true +k8s:openapi-gen=true interface{}","title":" Quantity"},{"location":"executor_swagger/#quobytevolumesource","text":"Quobyte volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example group string string group to map volume access to Default is no group +optional readOnly boolean bool readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. +optional registry string string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin +optional user string string user to map volume access to Defaults to serivceaccount user +optional volume string string volume is a string that references an already created Quobyte volume by name.","title":" QuobyteVolumeSource"},{"location":"executor_swagger/#rbdvolumesource","text":"RBD volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine +optional image string string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional monitors []string []string monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional readOnly boolean bool readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional secretRef LocalObjectReference LocalObjectReference user string string user is the rados user name. Default is admin. 
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional","title":" RBDVolumeSource"},{"location":"executor_swagger/#rawartifact","text":"RawArtifact allows raw string content to be placed as an artifact in a container Properties Name Type Go type Required Default Description Example data string string Data is the string contents of the artifact","title":" RawArtifact"},{"location":"executor_swagger/#resourcefieldselector","text":"ResourceFieldSelector represents container resources (cpu, memory) and their output format +structType=atomic Properties Name Type Go type Required Default Description Example containerName string string Container name: required for volumes, optional for env vars +optional divisor Quantity Quantity resource string string Required: resource to select","title":" ResourceFieldSelector"},{"location":"executor_swagger/#resourcelist","text":"ResourceList","title":" ResourceList"},{"location":"executor_swagger/#resourcerequirements","text":"Properties Name Type Go type Required Default Description Example limits ResourceList ResourceList requests ResourceList ResourceList","title":" ResourceRequirements"},{"location":"executor_swagger/#resourcetemplate","text":"ResourceTemplate is a template subtype to manipulate kubernetes resources Properties Name Type Go type Required Default Description Example action string string Action is the action to perform to the resource. Must be one of: get, create, apply, delete, replace, patch failureCondition string string FailureCondition is a label selector expression which describes the conditions of the k8s resource in which the step was considered failed flags []string []string Flags is a set of additional options passed to kubectl before submitting a resource I.e. to disable resource validation: flags: [ \"--validate=false\" # disable resource validation ] manifest string string Manifest contains the kubernetes manifest manifestFrom ManifestFrom ManifestFrom mergeStrategy string string MergeStrategy is the strategy used to merge a patch. It defaults to \"strategic\" Must be one of: strategic, merge, json setOwnerReference boolean bool SetOwnerReference sets the reference to the workflow on the OwnerReference of generated resource. successCondition string string SuccessCondition is a label selector expression which describes the conditions of the k8s resource in which it is acceptable to proceed to the following step","title":" ResourceTemplate"},{"location":"executor_swagger/#retryaffinity","text":"Properties Name Type Go type Required Default Description Example nodeAntiAffinity RetryNodeAntiAffinity RetryNodeAntiAffinity","title":" RetryAffinity"},{"location":"executor_swagger/#retrynodeantiaffinity","text":"In order to prevent running steps on the same host, it uses \"kubernetes.io/hostname\". interface{}","title":" RetryNodeAntiAffinity"},{"location":"executor_swagger/#retrypolicy","text":"Name Type Go type Default Description Example RetryPolicy string string","title":" RetryPolicy"},{"location":"executor_swagger/#retrystrategy","text":"RetryStrategy provides controls on how to retry a workflow step Properties Name Type Go type Required Default Description Example affinity RetryAffinity RetryAffinity backoff Backoff Backoff expression string string Expression is a condition expression for when a node will be retried. 
If it evaluates to false, the node will not be retried and the retry strategy will be ignored limit IntOrString IntOrString retryPolicy RetryPolicy RetryPolicy","title":" RetryStrategy"},{"location":"executor_swagger/#s3artifact","text":"S3Artifact is the location of an S3 artifact Properties Name Type Go type Required Default Description Example accessKeySecret SecretKeySelector SecretKeySelector bucket string string Bucket is the name of the bucket caSecret SecretKeySelector SecretKeySelector createBucketIfNotPresent CreateS3BucketOptions CreateS3BucketOptions encryptionOptions S3EncryptionOptions S3EncryptionOptions endpoint string string Endpoint is the hostname of the bucket endpoint insecure boolean bool Insecure will connect to the service with TLS key string string Key is the key in the bucket where the artifact resides region string string Region contains the optional bucket region roleARN string string RoleARN is the Amazon Resource Name (ARN) of the role to assume. secretKeySecret SecretKeySelector SecretKeySelector useSDKCreds boolean bool UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":" S3Artifact"},{"location":"executor_swagger/#s3encryptionoptions","text":"S3EncryptionOptions used to determine encryption options during s3 operations Properties Name Type Go type Required Default Description Example enableEncryption boolean bool EnableEncryption tells the driver to encrypt objects if set to true. If kmsKeyId and serverSideCustomerKeySecret are not set, SSE-S3 will be used kmsEncryptionContext string string KmsEncryptionContext is a json blob that contains an encryption context. See https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context for more information kmsKeyId string string KMSKeyId tells the driver to encrypt the object using the specified KMS Key. serverSideCustomerKeySecret SecretKeySelector SecretKeySelector","title":" S3EncryptionOptions"},{"location":"executor_swagger/#selinuxoptions","text":"SELinuxOptions are the labels to be applied to the container Properties Name Type Go type Required Default Description Example level string string Level is SELinux level label that applies to the container. +optional role string string Role is a SELinux role label that applies to the container. +optional type string string Type is a SELinux type label that applies to the container. +optional user string string User is a SELinux user label that applies to the container. +optional","title":" SELinuxOptions"},{"location":"executor_swagger/#scaleiovolumesource","text":"ScaleIOVolumeSource represents a persistent ScaleIO volume Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Default is \"xfs\". +optional gateway string string gateway is the host address of the ScaleIO API Gateway. protectionDomain string string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. +optional readOnly boolean bool readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional secretRef LocalObjectReference LocalObjectReference sslEnabled boolean bool sslEnabled Flag enable/disable SSL communication with Gateway, default false +optional storageMode string string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. 
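To make the S3Artifact and S3EncryptionOptions fields above more concrete, here is a minimal sketch of an output artifact stored in S3; the endpoint, bucket, key, and Secret names are illustrative only.

```yaml
outputs:
  artifacts:
    - name: results                  # artifact name within the template
      path: /tmp/results.txt         # file produced inside the container
      s3:                            # S3Artifact
        endpoint: s3.amazonaws.com   # hostname of the bucket endpoint
        bucket: my-bucket            # placeholder bucket
        key: runs/results.txt        # object key in the bucket
        accessKeySecret:             # SecretKeySelector
          name: my-s3-credentials    # placeholder Secret
          key: accessKey
        secretKeySecret:
          name: my-s3-credentials
          key: secretKey
        encryptionOptions:           # S3EncryptionOptions
          enableEncryption: true     # SSE-S3 is used when no KMS key is set
```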
Default is ThinProvisioned. +optional storagePool string string storagePool is the ScaleIO Storage Pool associated with the protection domain. +optional system string string system is the name of the storage system as configured in ScaleIO. volumeName string string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source.","title":" ScaleIOVolumeSource"},{"location":"executor_swagger/#scripttemplate","text":"ScriptTemplate is a template subtype to enable scripting through code steps Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. 
Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext source string string Source contains the source code of the script to execute startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional","title":" ScriptTemplate"},{"location":"executor_swagger/#seccompprofile","text":"Only one profile source may be set. +union Properties Name Type Go type Required Default Description Example localhostProfile string string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is \"Localhost\". +optional type SeccompProfileType SeccompProfileType","title":" SeccompProfile"},{"location":"executor_swagger/#seccompprofiletype","text":"+enum Name Type Go type Default Description Example SeccompProfileType string string +enum","title":" SeccompProfileType"},{"location":"executor_swagger/#secretenvsource","text":"The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Properties Name Type Go type Required Default Description Example name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. 
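As an informal illustration of the ScriptTemplate fields above (image, command, source), a template of this shape runs an inline script; the template name and image are placeholders.

```yaml
templates:
  - name: gen-random         # placeholder template name
    script:                  # ScriptTemplate
      image: python:3.11     # container image that runs the script
      command: [python]      # entrypoint; the source below is passed to it
      source: |              # Source holds the script code to execute
        import random
        print(random.randint(1, 100))
```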
apiVersion, kind, uid? +optional optional boolean bool Specify whether the Secret must be defined +optional","title":" SecretEnvSource"},{"location":"executor_swagger/#secretkeyselector","text":"+structType=atomic Properties Name Type Go type Required Default Description Example key string string The key of the secret to select from. Must be a valid secret key. name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool Specify whether the Secret or its key must be defined +optional","title":" SecretKeySelector"},{"location":"executor_swagger/#secretprojection","text":"The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. Properties Name Type Go type Required Default Description Example items [] KeyToPath []*KeyToPath items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool optional field specify whether the Secret or its key must be defined +optional","title":" SecretProjection"},{"location":"executor_swagger/#secretvolumesource","text":"The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional items [] KeyToPath []*KeyToPath items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional optional boolean bool optional field specify whether the Secret or its keys must be defined +optional secretName string string secretName is the name of the secret in the pod's namespace to use. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret +optional","title":" SecretVolumeSource"},{"location":"executor_swagger/#securitycontext","text":"Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Properties Name Type Go type Required Default Description Example allowPrivilegeEscalation boolean bool AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. +optional capabilities Capabilities Capabilities privileged boolean bool Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. +optional procMount ProcMountType ProcMountType readOnlyRootFilesystem boolean bool Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. +optional runAsGroup int64 (formatted integer) int64 The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. +optional runAsNonRoot boolean bool Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. +optional runAsUser int64 (formatted integer) int64 The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. +optional seLinuxOptions SELinuxOptions SELinuxOptions seccompProfile SeccompProfile SeccompProfile windowsOptions WindowsSecurityContextOptions WindowsSecurityContextOptions","title":" SecurityContext"},{"location":"executor_swagger/#semaphoreref","text":"SemaphoreRef is a reference of Semaphore Properties Name Type Go type Required Default Description Example configMapKeyRef ConfigMapKeySelector ConfigMapKeySelector namespace string string \"[namespace of workflow]\"","title":" SemaphoreRef"},{"location":"executor_swagger/#sequence","text":"Sequence expands a workflow step into numeric range Properties Name Type Go type Required Default Description Example count IntOrString IntOrString end IntOrString IntOrString format string string Format is a printf format string to format the value in the sequence start IntOrString IntOrString","title":" Sequence"},{"location":"executor_swagger/#serviceaccounttokenprojection","text":"ServiceAccountTokenProjection represents a projected service account token volume. 
This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). Properties Name Type Go type Required Default Description Example audience string string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. +optional expirationSeconds int64 (formatted integer) int64 expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. +optional path string string path is the path relative to the mount point of the file to project the token into.","title":" ServiceAccountTokenProjection"},{"location":"executor_swagger/#storagemedium","text":"Name Type Go type Default Description Example StorageMedium string string","title":" StorageMedium"},{"location":"executor_swagger/#storageosvolumesource","text":"Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. +optional readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional secretRef LocalObjectReference LocalObjectReference volumeName string string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to \"default\" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. +optional","title":" StorageOSVolumeSource"},{"location":"executor_swagger/#suppliedvaluefrom","text":"interface{}","title":" SuppliedValueFrom"},{"location":"executor_swagger/#suspendtemplate","text":"SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time Properties Name Type Go type Required Default Description Example duration string string Duration is the seconds to wait before automatically resuming a template. Must be a string. Default unit is seconds. 
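A minimal sketch of the SuspendTemplate just described, pausing a workflow at a predetermined point (the template name is illustrative):

```yaml
templates:
  - name: wait-before-next-step   # placeholder name
    suspend:                      # SuspendTemplate
      duration: "2m"              # resumes automatically after two minutes; a bare integer is treated as seconds
```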
Could also be a Duration, e.g.: \"2m\", \"6h\"","title":" SuspendTemplate"},{"location":"executor_swagger/#synchronization","text":"Synchronization holds synchronization lock configuration Properties Name Type Go type Required Default Description Example mutex Mutex Mutex semaphore SemaphoreRef SemaphoreRef","title":" Synchronization"},{"location":"executor_swagger/#sysctl","text":"Sysctl defines a kernel parameter to be set Properties Name Type Go type Required Default Description Example name string string Name of a property to set value string string Value of a property to set","title":" Sysctl"},{"location":"executor_swagger/#tcpsocketaction","text":"TCPSocketAction describes an action based on opening a socket Properties Name Type Go type Required Default Description Example host string string Optional: Host name to connect to, defaults to the pod IP. +optional port IntOrString IntOrString","title":" TCPSocketAction"},{"location":"executor_swagger/#tainteffect","text":"+enum Name Type Go type Default Description Example TaintEffect string string +enum","title":" TaintEffect"},{"location":"executor_swagger/#tarstrategy","text":"TarStrategy will tar and gzip the file or directory when saving Properties Name Type Go type Required Default Description Example compressionLevel int32 (formatted integer) int32 CompressionLevel specifies the gzip compression level to use for the artifact. Defaults to gzip.DefaultCompression.","title":" TarStrategy"},{"location":"executor_swagger/#template","text":"Template is a reusable and composable unit of execution in a workflow Properties Name Type Go type Required Default Description Example activeDeadlineSeconds IntOrString IntOrString affinity Affinity Affinity archiveLocation ArtifactLocation ArtifactLocation automountServiceAccountToken boolean bool AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods. ServiceAccountName of ExecutorConfig must be specified if this value is false. container Container Container containerSet ContainerSetTemplate ContainerSetTemplate daemon boolean bool Daemon will allow a workflow to proceed to the next step so long as the container reaches readiness dag DAGTemplate DAGTemplate data Data Data executor ExecutorConfig ExecutorConfig failFast boolean bool FailFast, if specified, will fail this template if any of its child pods has failed. This is useful for when this template is expanded with withItems , etc. hostAliases [] HostAlias []*HostAlias HostAliases is an optional list of hosts and IPs that will be injected into the pod spec +patchStrategy=merge +patchMergeKey=ip http HTTP HTTP initContainers [] UserContainer []*UserContainer InitContainers is a list of containers which run before the main container. +patchStrategy=merge +patchMergeKey=name inputs Inputs Inputs memoize Memoize Memoize metadata Metadata Metadata metrics Metrics Metrics name string string Name is the name of the template nodeSelector map of string map[string]string NodeSelector is a selector to schedule this step of the workflow to be run on the selected node(s). Overrides the selector set at the workflow level. outputs Outputs Outputs parallelism int64 (formatted integer) int64 Parallelism limits the max total parallel pods that can execute at the same time within the boundaries of this template invocation. If additional steps/dag templates are invoked, the pods created by those templates will not be counted towards this total. 
plugin Plugin Plugin podSpecPatch string string PodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of container fields which are not strings (e.g. resource limits). priority int32 (formatted integer) int32 Priority to apply to workflow pods. priorityClassName string string PriorityClassName to apply to workflow pods. resource ResourceTemplate ResourceTemplate retryStrategy RetryStrategy RetryStrategy schedulerName string string If specified, the pod will be dispatched by specified scheduler. Or it will be dispatched by workflow scope scheduler if specified. If neither specified, the pod will be dispatched by default scheduler. +optional script ScriptTemplate ScriptTemplate securityContext PodSecurityContext PodSecurityContext serviceAccountName string string ServiceAccountName to apply to workflow pods sidecars [] UserContainer []*UserContainer Sidecars is a list of containers which run alongside the main container Sidecars are automatically killed when the main container completes +patchStrategy=merge +patchMergeKey=name steps [] ParallelSteps []ParallelSteps Steps define a series of sequential/parallel workflow steps suspend SuspendTemplate SuspendTemplate synchronization Synchronization Synchronization timeout string string Timeout allows to set the total node execution timeout duration counting from the node's start time. This duration also includes time in which the node spends in Pending state. This duration may not be applied to Step or DAG templates. tolerations [] Toleration []*Toleration Tolerations to apply to workflow pods. +patchStrategy=merge +patchMergeKey=key volumes [] Volume []*Volume Volumes is a list of volumes that can be mounted by containers in a template. +patchStrategy=merge +patchMergeKey=name","title":" Template"},{"location":"executor_swagger/#templateref","text":"Properties Name Type Go type Required Default Description Example clusterScope boolean bool ClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate). name string string Name is the resource name of the template. template string string Template is the name of referred template in the resource.","title":" TemplateRef"},{"location":"executor_swagger/#terminationmessagepolicy","text":"+enum Name Type Go type Default Description Example TerminationMessagePolicy string string +enum","title":" TerminationMessagePolicy"},{"location":"executor_swagger/#time","text":"+protobuf.options.marshal=false +protobuf.as=Timestamp +protobuf.options.(gogoproto.goproto_stringer)=false interface{}","title":" Time"},{"location":"executor_swagger/#toleration","text":"The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator . Properties Name Type Go type Required Default Description Example effect TaintEffect TaintEffect key string string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. +optional operator TolerationOperator TolerationOperator tolerationSeconds int64 (formatted integer) int64 TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. 
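For context, the Toleration fields above are commonly set per template; a small sketch with a placeholder taint key follows.

```yaml
templates:
  - name: tolerant-step             # placeholder template name
    tolerations:                    # Toleration entries as described above
      - key: nvidia.com/gpu         # taint key to tolerate (placeholder)
        operator: Exists            # match any value for this key
        effect: NoSchedule          # only tolerate the NoSchedule effect
    container:
      image: busybox                # placeholder image
      command: [echo, hello]
```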
+optional value string string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. +optional","title":" Toleration"},{"location":"executor_swagger/#tolerationoperator","text":"+enum Name Type Go type Default Description Example TolerationOperator string string +enum","title":" TolerationOperator"},{"location":"executor_swagger/#transformation","text":"[] TransformationStep","title":" Transformation"},{"location":"executor_swagger/#transformationstep","text":"Properties Name Type Go type Required Default Description Example expression string string Expression defines an expr expression to apply","title":" TransformationStep"},{"location":"executor_swagger/#type","text":"Name Type Go type Default Description Example Type int64 (formatted integer) int64","title":" Type"},{"location":"executor_swagger/#typedlocalobjectreference","text":"TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. +structType=atomic Properties Name Type Go type Required Default Description Example apiGroup string string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. +optional kind string string Kind is the type of resource being referenced name string string Name is the name of resource being referenced","title":" TypedLocalObjectReference"},{"location":"executor_swagger/#uid","text":"UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. Name Type Go type Default Description Example UID string string UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated.","title":" UID"},{"location":"executor_swagger/#urischeme","text":"URIScheme identifies the scheme used for connection to a host for Get actions +enum Name Type Go type Default Description Example URIScheme string string URIScheme identifies the scheme used for connection to a host for Get actions +enum","title":" URIScheme"},{"location":"executor_swagger/#usercontainer","text":"Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. 
Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe mirrorVolumeMounts boolean bool MirrorVolumeMounts will mount the same volumes specified in the main container to the container (including artifacts), at the same mountPaths. This enables dind daemon to partially see the same filesystem as the main container in order to use features such as docker volume binding name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. 
Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional","title":" UserContainer"},{"location":"executor_swagger/#valuefrom","text":"ValueFrom describes a location in which to obtain the value to a parameter Properties Name Type Go type Required Default Description Example configMapKeyRef ConfigMapKeySelector ConfigMapKeySelector default AnyString AnyString event string string Selector (https://github.com/antonmedv/expr) that is evaluated against the event to get the value of the parameter. E.g. payload.message expression string string Expression, if defined, is evaluated to specify the value for the parameter jqFilter string string JQFilter expression against the resource object in resource templates jsonPath string string JSONPath of a resource to retrieve an output parameter value from in resource templates parameter string string Parameter reference to a step or dag task in which to retrieve an output parameter value from (e.g. '{{steps.mystep.outputs.myparam}}') path string string Path in the container to retrieve an output parameter value from in container templates supplied SuppliedValueFrom SuppliedValueFrom","title":" ValueFrom"},{"location":"executor_swagger/#volume","text":"Properties Name Type Go type Required Default Description Example awsElasticBlockStore AWSElasticBlockStoreVolumeSource AWSElasticBlockStoreVolumeSource azureDisk AzureDiskVolumeSource AzureDiskVolumeSource azureFile AzureFileVolumeSource AzureFileVolumeSource cephfs CephFSVolumeSource CephFSVolumeSource cinder CinderVolumeSource CinderVolumeSource configMap ConfigMapVolumeSource ConfigMapVolumeSource csi CSIVolumeSource CSIVolumeSource downwardAPI DownwardAPIVolumeSource DownwardAPIVolumeSource emptyDir EmptyDirVolumeSource EmptyDirVolumeSource ephemeral EphemeralVolumeSource EphemeralVolumeSource fc FCVolumeSource FCVolumeSource flexVolume FlexVolumeSource FlexVolumeSource flocker FlockerVolumeSource FlockerVolumeSource gcePersistentDisk GCEPersistentDiskVolumeSource GCEPersistentDiskVolumeSource gitRepo GitRepoVolumeSource GitRepoVolumeSource glusterfs GlusterfsVolumeSource GlusterfsVolumeSource hostPath HostPathVolumeSource HostPathVolumeSource iscsi ISCSIVolumeSource ISCSIVolumeSource name string string name of the volume. Must be a DNS_LABEL and unique within the pod. 
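To show how ValueFrom binds an output parameter to a file path in a container template (as described above), a short sketch; the path and parameter name are illustrative.

```yaml
outputs:
  parameters:
    - name: result
      valueFrom:
        path: /tmp/result.txt   # read the parameter value from this file in the container
        default: "none"         # fallback value if the file is missing
```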
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs NFSVolumeSource NFSVolumeSource persistentVolumeClaim PersistentVolumeClaimVolumeSource PersistentVolumeClaimVolumeSource photonPersistentDisk PhotonPersistentDiskVolumeSource PhotonPersistentDiskVolumeSource portworxVolume PortworxVolumeSource PortworxVolumeSource projected ProjectedVolumeSource ProjectedVolumeSource quobyte QuobyteVolumeSource QuobyteVolumeSource rbd RBDVolumeSource RBDVolumeSource scaleIO ScaleIOVolumeSource ScaleIOVolumeSource secret SecretVolumeSource SecretVolumeSource storageos StorageOSVolumeSource StorageOSVolumeSource vsphereVolume VsphereVirtualDiskVolumeSource VsphereVirtualDiskVolumeSource","title":" Volume"},{"location":"executor_swagger/#volumedevice","text":"Properties Name Type Go type Required Default Description Example devicePath string string devicePath is the path inside of the container that the device will be mapped to. name string string name must match the name of a persistentVolumeClaim in the pod","title":" VolumeDevice"},{"location":"executor_swagger/#volumemount","text":"Properties Name Type Go type Required Default Description Example mountPath string string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation MountPropagationMode MountPropagationMode name string string This must match the Name of a Volume. readOnly boolean bool Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. +optional subPath string string Path within the volume from which the container's volume should be mounted. Defaults to \"\" (volume's root). +optional subPathExpr string string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to \"\" (volume's root). SubPathExpr and SubPath are mutually exclusive. +optional","title":" VolumeMount"},{"location":"executor_swagger/#volumeprojection","text":"Projection that may be projected along with other supported volume types Properties Name Type Go type Required Default Description Example configMap ConfigMapProjection ConfigMapProjection downwardAPI DownwardAPIProjection DownwardAPIProjection secret SecretProjection SecretProjection serviceAccountToken ServiceAccountTokenProjection ServiceAccountTokenProjection","title":" VolumeProjection"},{"location":"executor_swagger/#vspherevirtualdiskvolumesource","text":"Properties Name Type Go type Required Default Description Example fsType string string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. +optional storagePolicyID string string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. +optional storagePolicyName string string storagePolicyName is the storage Policy Based Management (SPBM) profile name. 
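The Volume and VolumeMount fields above typically appear together in a template; a minimal sketch using an emptyDir volume (names and paths are placeholders):

```yaml
templates:
  - name: use-workdir                          # placeholder template name
    volumes:
      - name: workdir                          # Volume shared by containers in this template
        emptyDir: {}                           # EmptyDirVolumeSource
    container:
      image: busybox                           # placeholder image
      command: [sh, -c, "echo hi > /work/out.txt"]
      volumeMounts:                            # VolumeMount, as described above
        - name: workdir                        # must match the Volume name
          mountPath: /work                     # mount path inside the container
```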
+optional volumePath string string volumePath is the path that identifies vSphere volume vmdk","title":" VsphereVirtualDiskVolumeSource"},{"location":"executor_swagger/#weightedpodaffinityterm","text":"The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Properties Name Type Go type Required Default Description Example podAffinityTerm PodAffinityTerm PodAffinityTerm weight int32 (formatted integer) int32 weight associated with matching the corresponding podAffinityTerm, in the range 1-100.","title":" WeightedPodAffinityTerm"},{"location":"executor_swagger/#windowssecuritycontextoptions","text":"Properties Name Type Go type Required Default Description Example gmsaCredentialSpec string string GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. +optional gmsaCredentialSpecName string string GMSACredentialSpecName is the name of the GMSA credential spec to use. +optional hostProcess boolean bool HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. +optional runAsUserName string string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. +optional","title":" WindowsSecurityContextOptions"},{"location":"executor_swagger/#workflow","text":"Properties Name Type Go type Required Default Description Example metadata ObjectMeta ObjectMeta \u2713","title":" Workflow"},{"location":"executor_swagger/#zipstrategy","text":"ZipStrategy will unzip zipped input artifacts interface{}","title":" ZipStrategy"},{"location":"faq/","text":"FAQ \u00b6 \"token not valid\", \"any bearer token is able to login in the UI or use the API\" \u00b6 You may not have configured Argo Server authentication correctly. If you want SSO, try running with --auth-mode=sso . If you're using --auth-mode=client , make sure you have Bearer in front of the ServiceAccount Secret, as mentioned in Access Token . Learn more about the Argo Server set-up Argo Server return EOF error \u00b6 Since v3.0 the Argo Server listens for HTTPS requests, rather than HTTP. Try changing your URL to HTTPS, or start Argo Server using --secure=false . My workflow hangs \u00b6 Check your wait container logs: Is there an RBAC error? Learn more about workflow RBAC Return \"unknown (get pods)\" error \u00b6 You're probably getting a permission denied error because your RBAC is not configured. Learn more about workflow RBAC and even more details There is an error about /var/run/docker.sock \u00b6 Try using a different container runtime executor. 
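For the RBAC-related FAQ entries above, one possible shape of a namespaced Role for workflow pods is sketched below; the exact resources and verbs depend on your Argo Workflows version and executor, so treat this as an assumption and consult the linked workflow RBAC documentation.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-role                 # example name only
rules:
  - apiGroups: [""]
    resources: [pods]
    verbs: [get, watch, patch]        # lets the wait container observe and annotate its pod
  - apiGroups: [argoproj.io]
    resources: [workflowtaskresults]
    verbs: [create, patch]            # used by newer executors to report step results
```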
Learn more about executors","title":"FAQ"},{"location":"faq/#faq","text":"","title":"FAQ"},{"location":"faq/#token-not-valid-any-bearer-token-is-able-to-login-in-the-ui-or-use-the-api","text":"You may not have configured Argo Server authentication correctly. If you want SSO, try running with --auth-mode=sso . If you're using --auth-mode=client , make sure you have Bearer in front of the ServiceAccount Secret, as mentioned in Access Token . Learn more about the Argo Server set-up","title":"\"token not valid\", \"any bearer token is able to login in the UI or use the API\""},{"location":"faq/#argo-server-return-eof-error","text":"Since v3.0 the Argo Server listens for HTTPS requests, rather than HTTP. Try changing your URL to HTTPS, or start Argo Server using --secure=false .","title":"Argo Server return EOF error"},{"location":"faq/#my-workflow-hangs","text":"Check your wait container logs: Is there an RBAC error? Learn more about workflow RBAC","title":"My workflow hangs"},{"location":"faq/#return-unknown-get-pods-error","text":"You're probably getting a permission denied error because your RBAC is not configured. Learn more about workflow RBAC and even more details","title":"Return \"unknown (get pods)\" error"},{"location":"faq/#there-is-an-error-about-varrundockersock","text":"Try using a different container runtime executor. Learn more about executors","title":"There is an error about /var/run/docker.sock"},{"location":"fields/","text":"Field Reference \u00b6 Workflow \u00b6 Workflow is the definition of a workflow resource Examples (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - 
[`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`daemoned-stateful-set-with-service.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemoned-stateful-set-with-service.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - 
[`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - 
[`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-jobs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-jobs.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-orchestration.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-orchestration.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch-basic.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch-basic.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-resource-log-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-resource-log-selector.yaml) - [`k8s-set-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-set-owner-reference.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - 
[`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - 
[`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resource-delete-with-flags.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resource-delete-with-flags.yaml) - [`resource-flags.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resource-flags.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - 
[`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml) Fields \u00b6 Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta No description available spec WorkflowSpec No description available status WorkflowStatus No description available CronWorkflow \u00b6 CronWorkflow is the definition of a scheduled workflow resource Examples (click to open) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) Fields \u00b6 Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. 
In CamelCase. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta No description available spec CronWorkflowSpec No description available status CronWorkflowStatus No description available WorkflowTemplate \u00b6 WorkflowTemplate is the definition of a workflow template resource Examples (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta No description available spec WorkflowSpec No description available WorkflowSpec \u00b6 WorkflowSpec is the specification of a Workflow. 
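Before the example list, a minimal sketch of how the top-level fields above (apiVersion, kind, metadata, spec) combine with a WorkflowSpec in a hypothetical manifest; the resource names and image are illustrative assumptions, not taken from the linked examples.

```yaml
apiVersion: argoproj.io/v1alpha1   # versioned schema for Argo Workflows resources
kind: Workflow                     # the REST resource kind; WorkflowTemplate shares the same WorkflowSpec, CronWorkflow wraps it in a CronWorkflowSpec
metadata:
  generateName: hello-world-       # ObjectMeta; the API server appends a random suffix
spec:                              # WorkflowSpec (fields documented below)
  entrypoint: main                 # template reference where execution starts
  templates:
    - name: main
      container:
        image: busybox             # illustrative image
        command: [echo, "hello"]
```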
Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - 
[`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - 
[`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - 
[`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - 
[`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - 
[`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml) Fields \u00b6 Field Name Field Type Description activeDeadlineSeconds integer Optional duration in seconds relative to the workflow start time which the workflow is allowed to run before the controller terminates the io.argoproj.workflow.v1alpha1. A value of zero is used to terminate a Running workflow affinity Affinity Affinity sets the scheduling constraints for all pods in the io.argoproj.workflow.v1alpha1. Can be overridden by an affinity specified in the template archiveLogs boolean ArchiveLogs indicates if the container logs should be archived arguments Arguments Arguments contain the parameters and artifacts sent to the workflow entrypoint Parameters are referencable globally using the 'workflow' variable prefix. e.g. {{io.argoproj.workflow.v1alpha1.parameters.myparam}} artifactGC WorkflowLevelArtifactGC ArtifactGC describes the strategy to use when deleting artifacts from completed or deleted workflows (applies to all output Artifacts unless Artifact.ArtifactGC is specified, which overrides this) artifactRepositoryRef ArtifactRepositoryRef ArtifactRepositoryRef specifies the configMap name and key containing the artifact repository config. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods. ServiceAccountName of ExecutorConfig must be specified if this value is false. dnsConfig PodDNSConfig PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to \"ClusterFirst\". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. entrypoint string Entrypoint is a template reference to the starting point of the io.argoproj.workflow.v1alpha1. executor ExecutorConfig Executor holds configurations of executor containers of the io.argoproj.workflow.v1alpha1. hooks LifecycleHook Hooks holds the lifecycle hook which is invoked at lifecycle of step, irrespective of the success, failure, or error status of the primary step hostAliases Array< HostAlias > No description available hostNetwork boolean Host networking requested for this workflow pod. Default to false. imagePullSecrets Array< LocalObjectReference > ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. 
More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod metrics Metrics Metrics are a list of metrics emitted from this Workflow nodeSelector Map< string , string > NodeSelector is a selector which will result in all pods of the workflow to be scheduled on the selected node(s). This is able to be overridden by a nodeSelector specified in the template. onExit string OnExit is a template reference which is invoked at the end of the workflow, irrespective of the success, failure, or error of the primary io.argoproj.workflow.v1alpha1. parallelism integer Parallelism limits the max total parallel pods that can execute at the same time in a workflow podDisruptionBudget PodDisruptionBudgetSpec PodDisruptionBudget holds the number of concurrent disruptions that you allow for Workflow's Pods. Controller will automatically add the selector with workflow name, if selector is empty. Optional: Defaults to empty. podGC PodGC PodGC describes the strategy to use when deleting completed pods podMetadata Metadata PodMetadata defines additional metadata that should be applied to workflow pods ~~ podPriority ~~ ~~ integer ~~ ~~Priority to apply to workflow pods.~~ DEPRECATED: Use PodPriorityClassName instead. podPriorityClassName string PriorityClassName to apply to workflow pods. podSpecPatch string PodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of container fields which are not strings (e.g. resource limits). priority integer Priority is used if controller is configured to process limited number of workflows in parallel. Workflows with higher priority are processed first. retryStrategy RetryStrategy RetryStrategy for all templates in the io.argoproj.workflow.v1alpha1. schedulerName string Set scheduler name for all pods. Will be overridden if container/script template's scheduler name is set. Default scheduler will be used if neither specified. securityContext PodSecurityContext SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to run all pods of the workflow as. shutdown string Shutdown will shutdown the workflow according to its ShutdownStrategy suspend boolean Suspend will suspend the workflow and prevent execution of any future steps in the workflow synchronization Synchronization Synchronization holds synchronization lock configuration for this Workflow templateDefaults Template TemplateDefaults holds default template values that will apply to all templates in the Workflow, unless overridden on the template-level templates Array< Template > Templates is a list of workflow templates used in a workflow tolerations Array< Toleration > Tolerations to apply to workflow pods. ttlStrategy TTLStrategy TTLStrategy limits the lifetime of a Workflow that has finished execution depending on if it Succeeded or Failed. If this struct is set, once the Workflow finishes, it will be deleted after the time to live expires. If this field is unset, the controller config map will hold the default values. volumeClaimGC VolumeClaimGC VolumeClaimGC describes the strategy to use when deleting volumes from completed workflows volumeClaimTemplates Array< PersistentVolumeClaim > VolumeClaimTemplates is a list of claims that containers are allowed to reference. 
The Workflow controller will create the claims at the beginning of the workflow and delete the claims upon completion of the workflow volumes Array< Volume > Volumes is a list of volumes that can be mounted by containers in a io.argoproj.workflow.v1alpha1. workflowMetadata WorkflowMetadata WorkflowMetadata contains some metadata of the workflow to refer to workflowTemplateRef WorkflowTemplateRef WorkflowTemplateRef holds a reference to a WorkflowTemplate for execution WorkflowStatus \u00b6 WorkflowStatus contains overall status information about a workflow Fields \u00b6 Field Name Field Type Description artifactGCStatus ArtGCStatus ArtifactGCStatus maintains the status of Artifact Garbage Collection artifactRepositoryRef ArtifactRepositoryRefStatus ArtifactRepositoryRef is used to cache the repository to use so we do not need to determine it everytime we reconcile. compressedNodes string Compressed and base64 decoded Nodes map conditions Array< Condition > Conditions is a list of conditions the Workflow may have estimatedDuration integer EstimatedDuration in seconds. finishedAt Time Time at which this workflow completed message string A human readable message indicating details about why the workflow is in this condition. nodes NodeStatus Nodes is a mapping between a node ID and the node's status. offloadNodeStatusVersion string Whether on not node status has been offloaded to a database. If exists, then Nodes and CompressedNodes will be empty. This will actually be populated with a hash of the offloaded data. outputs Outputs Outputs captures output values and artifact locations produced by the workflow via global outputs persistentVolumeClaims Array< Volume > PersistentVolumeClaims tracks all PVCs that were created as part of the io.argoproj.workflow.v1alpha1. The contents of this list are drained at the end of the workflow. phase string Phase a simple, high-level summary of where the workflow is in its lifecycle. Will be \"\" (Unknown), \"Pending\", or \"Running\" before the workflow is completed, and \"Succeeded\", \"Failed\" or \"Error\" once the workflow has completed. progress string Progress to completion resourcesDuration Map< integer , int64 > ResourcesDuration is the total for the workflow startedAt Time Time at which this workflow started storedTemplates Template StoredTemplates is a mapping between a template ref and the node's status. storedWorkflowTemplateSpec WorkflowSpec StoredWorkflowSpec stores the WorkflowTemplate spec for future execution. synchronization SynchronizationStatus Synchronization stores the status of synchronization locks taskResultsCompleted Map< boolean , string > Have task results been completed? (mapped by Pod name) used to prevent premature garbage collection of artifacts. 
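To make the spec-level options above concrete, here is a short sketch using a few of the documented WorkflowSpec fields (activeDeadlineSeconds, parallelism, serviceAccountName, podGC, ttlStrategy); the specific values and the ServiceAccount name are assumptions for illustration, not recommendations from this reference.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: spec-options-
spec:
  entrypoint: main
  activeDeadlineSeconds: 3600       # terminate the Workflow if it runs longer than one hour
  parallelism: 2                    # at most two pods of this Workflow execute at the same time
  serviceAccountName: argo-workflow # assumed ServiceAccount; it must exist in the namespace
  podGC:
    strategy: OnPodCompletion       # delete pods as soon as they complete
  ttlStrategy:
    secondsAfterCompletion: 86400   # delete the finished Workflow one day after completion
  templates:
    - name: main
      container:
        image: busybox
        command: [sh, -c, "echo done"]
```

Once such a Workflow finishes, the controller records the outcome in the WorkflowStatus fields above: phase becomes Succeeded, Failed, or Error, and startedAt/finishedAt are populated.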
CronWorkflowSpec \u00b6 CronWorkflowSpec is the specification of a CronWorkflow Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - 
[`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - 
[`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - 
[`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - 
[`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - 
[`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml) Fields \u00b6 Field Name Field Type Description concurrencyPolicy string ConcurrencyPolicy is the K8s-style concurrency policy that will be used failedJobsHistoryLimit integer FailedJobsHistoryLimit is the number of failed jobs to be kept at a time schedule string Schedule is a schedule to run the Workflow in Cron format startingDeadlineSeconds integer StartingDeadlineSeconds is the K8s-style deadline that will limit the time a CronWorkflow will be run after its original scheduled time if it is missed. successfulJobsHistoryLimit integer SuccessfulJobsHistoryLimit is the number of successful jobs to be kept at a time suspend boolean Suspend is a flag that will stop new CronWorkflows from running if set to true timezone string Timezone is the timezone against which the cron schedule will be calculated, e.g. \"Asia/Tokyo\". Default is machine's local time. workflowMetadata ObjectMeta WorkflowMetadata contains some metadata of the workflow to be run workflowSpec WorkflowSpec WorkflowSpec is the spec of the workflow to be run CronWorkflowStatus \u00b6 CronWorkflowStatus is the status of a CronWorkflow Fields \u00b6 Field Name Field Type Description active Array< ObjectReference > Active is a list of active workflows stemming from this CronWorkflow conditions Array< Condition > Conditions is a list of conditions the CronWorkflow may have lastScheduledTime Time LastScheduleTime is the last time the CronWorkflow was scheduled Arguments \u00b6 Arguments to a template Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - 
[`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - 
[`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - 
[`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) Fields \u00b6 Field Name Field Type Description artifacts Array< Artifact > Artifacts is the list of artifacts to pass to the template or workflow parameters Array< Parameter > Parameters is the list of parameters to pass to the template or workflow WorkflowLevelArtifactGC \u00b6 WorkflowLevelArtifactGC describes how to delete artifacts from completed Workflows - this spec is used on the Workflow level Examples with this field (click to open) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) Fields \u00b6 Field Name Field Type Description forceFinalizerRemoval boolean ForceFinalizerRemoval: if set to true, the finalizer will be removed in the case that Artifact GC fails podMetadata Metadata PodMetadata is an optional field for specifying the Labels and Annotations that should be assigned 
to the Pod doing the deletion podSpecPatch string PodSpecPatch holds strategic merge patch to apply against the artgc pod spec. serviceAccountName string ServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion strategy string Strategy is the strategy to use. ArtifactRepositoryRef \u00b6 No description available Examples with this field (click to open) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) Fields \u00b6 Field Name Field Type Description configMap string The name of the config map. Defaults to \"artifact-repositories\". key string The config map key. Defaults to the value of the \"workflows.argoproj.io/default-artifact-repository\" annotation. ExecutorConfig \u00b6 ExecutorConfig holds configurations of an executor container. Fields \u00b6 Field Name Field Type Description serviceAccountName string ServiceAccountName specifies the service account name of the executor container. LifecycleHook \u00b6 No description available Examples with this field (click to open) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) Fields \u00b6 Field Name Field Type Description arguments Arguments Arguments hold arguments to the template expression string Expression is a condition expression for when a node will be retried. If it evaluates to false, the node will not be retried and the retry strategy will be ignored template string Template is the name of the template to execute by the hook templateRef TemplateRef TemplateRef is the reference to the template resource to execute by the hook Metrics \u00b6 Metrics are a list of metrics emitted from a Workflow/Template Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) Fields \u00b6 Field Name Field Type Description prometheus Array< Prometheus > Prometheus is a list of prometheus metrics to be emitted PodGC \u00b6 PodGC describes how to delete completed pods as they complete Examples with this field (click to open) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) Fields \u00b6 Field Name Field Type Description deleteDelayDuration Duration DeleteDelayDuration specifies the duration before pods in the GC queue get deleted. labelSelector LabelSelector LabelSelector is the label selector to check if the pods match the labels before being added to the pod GC queue. strategy string Strategy is the strategy to use. One of \"OnPodCompletion\", \"OnPodSuccess\", \"OnWorkflowCompletion\", \"OnWorkflowSuccess\". 
If unset, does not delete Pods Metadata \u00b6 Pod metadata Examples with this field (click to open) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) Fields \u00b6 Field Name Field Type Description annotations Map< string , string > No description available labels Map< string , string > No description available RetryStrategy \u00b6 RetryStrategy provides controls on how to retry a workflow step Examples with this field (click to open) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description affinity RetryAffinity Affinity prevents running the workflow's step on the same host backoff Backoff Backoff is a backoff strategy expression string Expression is a condition expression for when a node will be retried. If it evaluates to false, the node will not be retried and the retry strategy will be ignored limit IntOrString Limit is the maximum number of retry attempts when retrying a container. It does not include the original container; the maximum number of total attempts will be limit + 1 . 
retryPolicy string RetryPolicy is a policy of NodePhase statuses that will be retried Synchronization \u00b6 Synchronization holds synchronization lock configuration Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) Fields \u00b6 Field Name Field Type Description mutex Mutex Mutex holds the Mutex lock details semaphore SemaphoreRef Semaphore holds the Semaphore configuration Template \u00b6 Template is a reusable and composable unit of execution in a workflow Examples with this field (click to open) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) Fields \u00b6 Field Name Field Type Description activeDeadlineSeconds IntOrString Optional duration in seconds relative to the StartTime that the pod may be active on a node before the system actively tries to terminate the pod; value must be a positive integer. This field is only applicable to container and script templates. affinity Affinity Affinity sets the pod's scheduling constraints Overrides the affinity set at the workflow level (if any) archiveLocation ArtifactLocation Location in which all files related to the step will be stored (logs, artifacts, etc...). Can be overridden by individual items in Outputs. If omitted, will use the default artifact repository location configured in the controller, appended with the <workflowname>/<nodename> in the key. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods. ServiceAccountName of ExecutorConfig must be specified if this value is false. container Container Container is the main container image to run in the pod containerSet ContainerSetTemplate ContainerSet groups multiple containers within a single pod. daemon boolean Daemon will allow a workflow to proceed to the next step so long as the container reaches readiness dag DAGTemplate DAG template subtype which runs a DAG data Data Data is a data template executor ExecutorConfig Executor holds configurations of the executor container. failFast boolean FailFast, if specified, will fail this template if any of its child pods has failed. This is useful for when this template is expanded with withItems , etc. hostAliases Array< HostAlias > HostAliases is an optional list of hosts and IPs that will be injected into the pod spec http HTTP HTTP makes an HTTP request initContainers Array< UserContainer > InitContainers is a list of containers which run before the main container. inputs Inputs Inputs describe what input parameters and artifacts are supplied to this template memoize Memoize Memoize allows templates to use outputs generated from already executed templates metadata Metadata Metadata sets the pod's metadata, i.e. annotations and labels metrics Metrics Metrics are a list of metrics emitted from this template name string Name is the name of the template nodeSelector Map< string , string > NodeSelector is a selector to schedule this step of the workflow to be run on the selected node(s). Overrides the selector set at the workflow level. 
outputs Outputs Outputs describe the parameters and artifacts that this template produces parallelism integer Parallelism limits the max total parallel pods that can execute at the same time within the boundaries of this template invocation. If additional steps/dag templates are invoked, the pods created by those templates will not be counted towards this total. plugin Plugin Plugin is a plugin template podSpecPatch string PodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of container fields which are not strings (e.g. resource limits). priority integer Priority to apply to workflow pods. priorityClassName string PriorityClassName to apply to workflow pods. resource ResourceTemplate Resource template subtype which can run k8s resources retryStrategy RetryStrategy RetryStrategy describes how to retry a template when it fails schedulerName string If specified, the pod will be dispatched by the specified scheduler. Or it will be dispatched by the workflow scope scheduler if specified. If neither is specified, the pod will be dispatched by the default scheduler. script ScriptTemplate Script runs a portion of code against an interpreter securityContext PodSecurityContext SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. serviceAccountName string ServiceAccountName to apply to workflow pods sidecars Array< UserContainer > Sidecars is a list of containers which run alongside the main container Sidecars are automatically killed when the main container completes steps Array< Array< WorkflowStep > > Steps define a series of sequential/parallel workflow steps suspend SuspendTemplate Suspend template subtype which can suspend a workflow when reaching the step synchronization Synchronization Synchronization holds synchronization lock configuration for this template timeout string Timeout allows setting the total node execution timeout duration counting from the node's start time. This duration also includes time in which the node spends in Pending state. This duration may not be applied to Step or DAG templates. tolerations Array< Toleration > Tolerations to apply to workflow pods. volumes Array< Volume > Volumes is a list of volumes that can be mounted by containers in a template. TTLStrategy \u00b6 TTLStrategy is the strategy for the time to live depending on whether the workflow succeeded or failed Examples with this field (click to open) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) Fields \u00b6 Field Name Field Type Description secondsAfterCompletion integer SecondsAfterCompletion is the number of seconds to live after completion secondsAfterFailure integer SecondsAfterFailure is the number of seconds to live after failure secondsAfterSuccess integer SecondsAfterSuccess is the number of seconds to live after success VolumeClaimGC \u00b6 VolumeClaimGC describes how to delete volumes from completed Workflows Fields \u00b6 Field Name Field Type Description strategy string Strategy is the strategy to use. One of \"OnWorkflowCompletion\", \"OnWorkflowSuccess\". 
Defaults to \"OnWorkflowSuccess\" WorkflowMetadata \u00b6 No description available Examples with this field (click to open) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) Fields \u00b6 Field Name Field Type Description annotations Map< string , string > No description available labels Map< string , string > No description available labelsFrom LabelValueFrom No description available WorkflowTemplateRef \u00b6 WorkflowTemplateRef is a reference to a WorkflowTemplate resource. Examples with this field (click to open) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml) Fields \u00b6 Field Name Field Type Description clusterScope boolean ClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate). name string Name is the resource name of the workflow template. ArtGCStatus \u00b6 ArtGCStatus maintains state related to ArtifactGC Fields \u00b6 Field Name Field Type Description notSpecified boolean if this is true, we already checked to see if we need to do it and we don't podsRecouped Map< boolean , string > have completed Pods been processed? (mapped by Pod name) used to prevent re-processing the Status of a Pod more than once strategiesProcessed Map< boolean , string > have Pods been started to perform this strategy? (enables us not to re-process what we've already done) ArtifactRepositoryRefStatus \u00b6 No description available Examples with this field (click to open) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) Fields \u00b6 Field Name Field Type Description artifactRepository ArtifactRepository The repository the workflow will use. This may be empty before v3.1. configMap string The name of the config map. Defaults to \"artifact-repositories\". default boolean If this ref represents the default artifact repository, rather than a config map. key string The config map key. Defaults to the value of the \"workflows.argoproj.io/default-artifact-repository\" annotation. namespace string The namespace of the config map. Defaults to the workflow's namespace, or the controller's namespace (if found). 
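As a minimal sketch of the WorkflowTemplateRef fields described above (`name` and `clusterScope`), the following Workflow runs an existing WorkflowTemplate by name; the template name `my-template` and the `message` parameter are hypothetical placeholders, not taken from the linked examples:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: workflow-template-ref-sketch-
spec:
  # Arguments passed to the referenced template's entrypoint (hypothetical parameter)
  arguments:
    parameters:
      - name: message
        value: hello world
  workflowTemplateRef:
    name: my-template        # resource name of an existing WorkflowTemplate
    # clusterScope: true     # set to reference a ClusterWorkflowTemplate instead
```

The same `workflowTemplateRef` block can also appear inside a CronWorkflow's `workflowSpec`, as the `cron-backfill.yaml` example linked above does.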
Condition \u00b6 No description available Fields \u00b6 Field Name Field Type Description message string Message is the condition message status string Status is the status of the condition type string Type is the type of condition NodeStatus \u00b6 NodeStatus contains status information about an individual node in the workflow Fields \u00b6 Field Name Field Type Description boundaryID string BoundaryID indicates the node ID of the associated template root node to which this node belongs children Array< string > Children is a list of child node IDs daemoned boolean Daemoned tracks whether or not this node was daemoned and needs to be terminated displayName string DisplayName is a human readable representation of the node. Unique within a template boundary estimatedDuration integer EstimatedDuration in seconds. finishedAt Time Time at which this node completed hostNodeName string HostNodeName is the name of the Kubernetes node on which the Pod is running, if applicable id string ID is a unique identifier of a node within the workflow. It is implemented as a hash of the node name, which makes the ID deterministic inputs Inputs Inputs captures input parameter values and artifact locations supplied to this template invocation memoizationStatus MemoizationStatus MemoizationStatus holds information about cached nodes message string A human readable message indicating details about why the node is in this condition. name string Name is a unique name in the node tree used to generate the node ID nodeFlag NodeFlag NodeFlag tracks some history of the node (e.g. hooked, retried, etc.) outboundNodes Array< string > OutboundNodes tracks the node IDs which are considered \"outbound\" nodes to a template invocation. For every invocation of a template, there are nodes which we considered as \"outbound\". Essentially, these are the last nodes in the execution sequence to run, before the template is considered completed. These nodes are then connected as parents to a following step. In the case of single pod steps (i.e. container, script, resource templates), this list will be nil since the pod itself is already considered the \"outbound\" node. In the case of DAGs, outbound nodes are the \"target\" tasks (tasks with no children). In the case of steps, outbound nodes are all the containers involved in the last step group. NOTE: since templates are composable, the list of outbound nodes is carried upwards when a DAG/steps template invokes another DAG/steps template. In other words, the outbound nodes of a template will be a superset of the outbound nodes of its last children. outputs Outputs Outputs captures output parameter values and artifact locations produced by this template invocation phase string Phase is a simple, high-level summary of where the node is in its lifecycle. Can be used as a state machine. Will be one of these values \"Pending\", \"Running\" before the node is completed, or \"Succeeded\", \"Skipped\", \"Failed\", \"Error\", or \"Omitted\" as a final state. podIP string PodIP captures the IP of the pod for daemoned steps progress string Progress to completion resourcesDuration Map< integer , int64 > ResourcesDuration is indicative, but not accurate, resource duration. This is populated when the node completes. startedAt Time Time at which this node started synchronizationStatus NodeSynchronizationStatus SynchronizationStatus is the synchronization status of the node templateName string TemplateName is the template name which this node corresponds to. Not applicable to virtual nodes (e.g. 
Retry, StepGroup) templateRef TemplateRef TemplateRef is the reference to the template resource which this node corresponds to. Not applicable to virtual nodes (e.g. Retry, StepGroup) templateScope string TemplateScope is the template scope in which the template of this node was retrieved. type string Type indicates type of node Outputs \u00b6 Outputs hold parameters, artifacts, and results from a step Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - 
[`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description artifacts Array< Artifact > Artifacts holds the list of output artifacts produced by a step exitCode string ExitCode holds the exit code of a script template parameters Array< Parameter > Parameters holds the list of output parameters produced by a step result string Result holds the result (stdout) of a script template SynchronizationStatus \u00b6 SynchronizationStatus stores the status of semaphore and mutex. 
Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) Fields \u00b6 Field Name Field Type Description mutex MutexStatus Mutex stores this workflow's mutex holder details semaphore SemaphoreStatus Semaphore stores this workflow's Semaphore holder details Artifact \u00b6 Artifact indicates an artifact to place at a specified path Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - 
[`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description archive ArchiveStrategy Archive controls how the artifact will be saved to the artifact repository. archiveLogs boolean ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC describes the strategy to use when to deleting an artifact from completed or deleted workflows artifactory ArtifactoryArtifact Artifactory contains artifactory artifact location details azure AzureArtifact Azure contains Azure Storage artifact location details deleted boolean Has this been deleted? from string From allows an artifact to reference an artifact from a previous step fromExpression string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCS contains GCS artifact location details git GitArtifact Git contains git artifact location details globalName string GlobalName exports an output artifact to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFS contains HDFS artifact location details http HTTPArtifact HTTP contains HTTP artifact location details mode integer mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string name of the artifact. must be unique within a template's inputs/outputs. 
optional boolean Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSS contains OSS artifact location details path string Path is the container path to the artifact raw RawArtifact Raw contains raw artifact location details recurseMode boolean If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3 contains S3 artifact location details subPath string SubPath allows an artifact to be sourced from a subpath within the specified source Parameter \u00b6 Parameter indicate a passed string parameter to a service template with an optional default value Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - 
[`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - 
[`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - 
[`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) Fields \u00b6 Field Name Field Type Description default string Default is the default value to use for an input parameter if a value was not supplied description string Description is the parameter description enum Array< string > Enum holds a list of string values to choose from, for the actual value of the parameter globalName string GlobalName exports an output parameter to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.parameters.XXXX}} and in workflow.status.outputs.parameters name string Name is the parameter name value string Value is the literal value to use for the parameter. If specified in the context of an input parameter, the value takes precedence over any passed values valueFrom ValueFrom ValueFrom is the source for the output parameter's value TemplateRef \u00b6 TemplateRef is a reference of template resource. Examples with this field (click to open) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description clusterScope boolean ClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate). name string Name is the resource name of the template. template string Template is the name of referred template in the resource. 
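To make the Artifact, Parameter, and TemplateRef fields above more concrete, here is a minimal, hypothetical sketch of how they typically appear together in a Workflow spec. It is not taken from the linked examples; the resource and template names (`shared-template`, `say`, `print-message`) are placeholders.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: fields-demo-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: print-message
            template: print-message
            arguments:
              parameters:
                - name: message          # Parameter.name
                  value: "hello"          # Parameter.value
        - - name: from-template
            templateRef:                  # TemplateRef: call a template defined in another resource
              name: shared-template       # TemplateRef.name - a WorkflowTemplate (placeholder)
              template: say               # TemplateRef.template - template inside that resource
              # clusterScope: true        # uncomment to reference a ClusterWorkflowTemplate instead
    - name: print-message
      inputs:
        parameters:
          - name: message
            default: "fallback"           # Parameter.default - used when no value is supplied
        artifacts:
          - name: note                    # Artifact.name
            path: /tmp/note.txt           # Artifact.path - where the artifact is placed in the container
            raw:
              data: "inline artifact content"   # Artifact.raw - inline data, no artifact repository needed
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["cat /tmp/note.txt; echo {{inputs.parameters.message}}"]
```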
Prometheus \u00b6 Prometheus is a prometheus metric to be emitted Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) Fields \u00b6 Field Name Field Type Description counter Counter Counter is a counter metric gauge Gauge Gauge is a gauge metric help string Help is a string that describes the metric histogram Histogram Histogram is a histogram metric labels Array< MetricLabel > Labels is a list of metric labels name string Name is the name of the metric when string When is a conditional statement that decides when to emit the metric RetryAffinity \u00b6 RetryAffinity prevents running steps on the same host. Fields \u00b6 Field Name Field Type Description nodeAntiAffinity RetryNodeAntiAffinity No description available Backoff \u00b6 Backoff is a backoff strategy to use within retryStrategy Examples with this field (click to open) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) Fields \u00b6 Field Name Field Type Description duration string Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. \"2m\", \"1h\") factor IntOrString Factor is a factor to multiply the base duration after each failed retry maxDuration string MaxDuration is the maximum amount of time allowed for a workflow in the backoff strategy Mutex \u00b6 Mutex holds Mutex configuration Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) Fields \u00b6 Field Name Field Type Description name string name of the mutex namespace string Namespace is the namespace of the mutex, default: [namespace of workflow] SemaphoreRef \u00b6 SemaphoreRef is a reference of Semaphore Fields \u00b6 Field Name Field Type Description configMapKeyRef ConfigMapKeySelector ConfigMapKeyRef is configmap selector for Semaphore configuration namespace string Namespace is the namespace of the configmap, default: [namespace of workflow] ArtifactLocation \u00b6 ArtifactLocation describes a location for a single or multiple artifacts. It is used as single artifact in the context of inputs/outputs (e.g. outputs.artifacts.artname). It is also used to describe the location of multiple artifacts such as the archive location of a single workflow step, which the executor will use as a default location to store its files. 
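The Backoff, Mutex, and SemaphoreRef fields described above usually appear under a template's `retryStrategy` and `synchronization` blocks. The following is a minimal, hypothetical sketch under that assumption; the lock and ConfigMap names are placeholders, not references to real resources.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: retry-sync-demo-
spec:
  entrypoint: flaky
  templates:
    - name: flaky
      retryStrategy:
        limit: 4
        backoff:                     # Backoff
          duration: "10s"            # Backoff.duration - initial delay between retries
          factor: 2                  # Backoff.factor - multiply the delay after each failed retry
          maxDuration: "5m"          # Backoff.maxDuration - cap on total time spent backing off
      synchronization:
        mutex:                       # Mutex - only one holder at a time
          name: my-shared-lock       # Mutex.name (placeholder)
        # Alternatively, a semaphore whose size is read from a ConfigMap key:
        # semaphore:
        #   configMapKeyRef:         # SemaphoreRef.configMapKeyRef
        #     name: my-config        # ConfigMap name (placeholder)
        #     key: workflow-limit    # key holding the allowed concurrency
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["echo running under the lock"]
```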
Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) Fields \u00b6 Field Name Field Type Description archiveLogs boolean ArchiveLogs indicates if the container logs should be archived artifactory ArtifactoryArtifact Artifactory contains artifactory artifact location details azure AzureArtifact Azure contains Azure Storage artifact location details gcs GCSArtifact GCS contains GCS artifact location details git GitArtifact Git contains git artifact location details hdfs HDFSArtifact HDFS contains HDFS artifact location details http HTTPArtifact HTTP contains HTTP artifact location details oss OSSArtifact OSS contains OSS artifact location details raw RawArtifact Raw contains raw artifact location details s3 S3Artifact S3 contains S3 artifact location details ContainerSetTemplate \u00b6 No description available Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) Fields \u00b6 Field Name Field Type Description containers Array< ContainerNode > No description available retryStrategy ContainerSetRetryStrategy RetryStrategy describes how to retry a container nodes in the container set if it fails. Nbr of retries(default 0) and sleep duration between retries(default 0s, instant retry) can be set. 
volumeMounts Array< VolumeMount > No description available DAGTemplate \u00b6 DAGTemplate is a template subtype for directed acyclic graph templates Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - 
[`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description failFast boolean This flag is for DAG logic. The DAG logic has a built-in \"fail fast\" feature to stop scheduling new steps, as soon as it detects that one of the DAG nodes is failed. Then it waits until all DAG nodes are completed before failing the DAG itself. The FailFast flag default is true, if set to false, it will allow a DAG to run all branches of the DAG to completion (either success or failure), regardless of the failed outcomes of branches in the DAG. More info and example about this feature at https://github.com/argoproj/argo-workflows/issues/1442 target string Target are one or more names of targets to execute in a DAG tasks Array< DAGTask > Tasks are a list of DAG tasks Data \u00b6 Data is a data template Examples with this field (click to open) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) Fields \u00b6 Field Name Field Type Description source DataSource Source sources external data into a data template transformation Array< TransformationStep > Transformation applies a set of transformations HTTP \u00b6 No description available Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - 
[`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) Fields \u00b6 Field Name Field Type Description body string Body is content of the HTTP Request bodyFrom HTTPBodySource BodyFrom is content of the HTTP Request as Bytes headers Array< HTTPHeader > Headers are an optional list of headers to send with HTTP requests insecureSkipVerify boolean InsecureSkipVerify is a bool when if set to true will skip TLS verification for the HTTP client method string Method is HTTP methods for HTTP Request successCondition string SuccessCondition is an expression if evaluated to true is considered successful timeoutSeconds integer TimeoutSeconds is request timeout for HTTP Request. Default is 30 seconds url string URL of the HTTP Request UserContainer \u00b6 UserContainer is a container specified by a user. Examples with this field (click to open) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) Fields \u00b6 Field Name Field Type Description args Array< string > Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Container image name. 
More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes mirrorVolumeMounts boolean MirrorVolumeMounts will mount the same volumes specified in the main container to the container (including artifacts), at the same mountPaths. This enables dind daemon to partially see the same filesystem as the main container in order to use features such as docker volume binding name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. 
If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 
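In practice, UserContainer is the type behind a template's `initContainers` (and `sidecars`) list; `mirrorVolumeMounts` is the only field that is not inherited from the Kubernetes Container type. A minimal, hypothetical sketch of an init container that shares the main container's volume mounts:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: init-container-demo-
spec:
  entrypoint: main
  templates:
    - name: main
      volumes:
        - name: workdir
          emptyDir: {}
      initContainers:                  # Array<UserContainer>
        - name: fetch                  # UserContainer.name
          image: alpine:3.19           # UserContainer.image
          command: [sh, -c]            # UserContainer.command
          args: ["echo prepared > /work/input.txt"]
          mirrorVolumeMounts: true     # mount the same volumes as the main container, at the same paths
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["cat /work/input.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /work
```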
Inputs \u00b6 Inputs are the mechanism for passing parameters, artifacts, volumes from one template to another Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - 
[`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - 
[`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description artifacts Array< Artifact > Artifact are a list of artifacts passed as inputs parameters Array< Parameter > Parameters are a list of parameters passed as inputs Memoize \u00b6 Memoization enables caching for the Outputs of the template Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) Fields \u00b6 Field Name Field Type Description cache Cache Cache sets and configures the kind of cache key string Key is the key to use as the caching key maxAge string MaxAge is the maximum age (e.g. \"180s\", \"24h\") of an entry that is still considered valid. If an entry is older than the MaxAge, it will be ignored. Plugin \u00b6 Plugin is an Object with exactly one key ResourceTemplate \u00b6 ResourceTemplate is a template subtype to manipulate kubernetes resources Examples with this field (click to open) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) Fields \u00b6 Field Name Field Type Description action string Action is the action to perform to the resource. Must be one of: get, create, apply, delete, replace, patch failureCondition string FailureCondition is a label selector expression which describes the conditions of the k8s resource in which the step was considered failed flags Array< string > Flags is a set of additional options passed to kubectl before submitting a resource I.e. to disable resource validation: flags: [ \"--validate=false\" # disable resource validation ] manifest string Manifest contains the kubernetes manifest manifestFrom ManifestFrom ManifestFrom is the source for a single kubernetes manifest mergeStrategy string MergeStrategy is the strategy used to merge a patch. It defaults to \"strategic\" Must be one of: strategic, merge, json setOwnerReference boolean SetOwnerReference sets the reference to the workflow on the OwnerReference of generated resource. 
successCondition string SuccessCondition is a label selector expression which describes the conditions of the k8s resource in which it is acceptable to proceed to the following step ScriptTemplate \u00b6 ScriptTemplate is a template subtype to enable scripting through code steps Examples with this field (click to open) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - 
[`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description args Array< string > Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. 
Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ source string Source contains the source code of the script to execute startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. 
workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. WorkflowStep \u00b6 WorkflowStep is a reference to a template to execute in a series of steps Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - 
[`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - 
[`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description arguments Arguments Arguments hold arguments to the template continueOn ContinueOn ContinueOn makes argo proceed with the following step even if this step fails. 
Errors and Failed states can be specified hooks LifecycleHook Hooks hold the lifecycle hook which is invoked at lifecycle of step, irrespective of the success, failure, or error status of the primary step inline Template Inline is the template. Template must be empty if this is declared (and vice-versa). name string Name of the step ~~ onExit ~~ ~~ string ~~ ~~OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template.~~ DEPRECATED: Use Hooks[exit].Template instead. template string Template is the name of the template to execute as the step templateRef TemplateRef TemplateRef is the reference to the template resource to execute as the step. when string When is an expression in which the step should conditionally execute withItems Array< Item > WithItems expands a step into multiple parallel steps from the items in the list withParam string WithParam expands a step into multiple parallel steps from the value in the parameter, which is expected to be a JSON list. withSequence Sequence WithSequence expands a step into a numeric sequence SuspendTemplate \u00b6 SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time Examples with this field (click to open) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) Fields \u00b6 Field Name Field Type Description duration string Duration is the number of seconds to wait before automatically resuming a template. Must be a string. Default unit is seconds. 
Could also be a Duration, e.g.: \"2m\", \"6h\" LabelValueFrom \u00b6 No description available Examples with this field (click to open) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) Fields \u00b6 Field Name Field Type Description expression string No description available ArtifactRepository \u00b6 ArtifactRepository represents an artifact repository in which a controller will store its artifacts Fields \u00b6 Field Name Field Type Description archiveLogs boolean ArchiveLogs enables log archiving artifactory ArtifactoryArtifactRepository Artifactory stores artifacts to JFrog Artifactory azure AzureArtifactRepository Azure stores artifacts in an Azure Storage account gcs GCSArtifactRepository GCS stores artifacts in a GCS object store hdfs HDFSArtifactRepository HDFS stores artifacts in HDFS oss OSSArtifactRepository OSS stores artifacts in an OSS-compliant object store s3 S3ArtifactRepository S3 stores artifacts in an S3-compliant object store MemoizationStatus \u00b6 MemoizationStatus is the status of this memoized node Fields \u00b6 Field Name Field Type Description cacheName string Cache is the name of the cache that was used hit boolean Hit indicates whether this node was created from a cache entry key string Key is the name of the key used for this node's cache NodeFlag \u00b6 No description available Fields \u00b6 Field Name Field Type Description hooked boolean Hooked tracks whether or not this node was triggered by hook or onExit retried boolean Retried tracks whether or not this node was retried by retryStrategy NodeSynchronizationStatus \u00b6 NodeSynchronizationStatus stores the status of a node Fields \u00b6 Field Name Field Type Description waiting string Waiting is the name of the lock that this node is waiting for MutexStatus \u00b6 MutexStatus contains which objects hold mutex locks, and which objects this workflow is waiting on to release locks. Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) Fields \u00b6 Field Name Field Type Description holding Array< MutexHolding > Holding is a list of mutexes and their respective objects that are held by mutex lock for this workflow. waiting Array< MutexHolding > Waiting is a list of mutexes and their respective objects this workflow is waiting for. SemaphoreStatus \u00b6 No description available Fields \u00b6 Field Name Field Type Description holding Array< SemaphoreHolding > Holding stores the list of resources that have acquired the synchronization lock for workflows. waiting Array< SemaphoreHolding > Waiting indicates the list of current synchronization lock holders. 
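To make the WorkflowStep and SuspendTemplate fields documented above concrete, here is a minimal sketch of a Workflow that fans a step out over a list with withItems and then pauses on a suspend template; the workflow, template, and image names are illustrative only:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: steps-with-suspend-    # illustrative name
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: greet                 # WorkflowStep using withItems
            template: print-message
            withItems: [hello, world]
        - - name: pause                 # WorkflowStep referencing a suspend template
            template: wait-a-bit
    - name: wait-a-bit
      suspend:
        duration: 30s                   # SuspendTemplate.duration; plain numbers are treated as seconds
    - name: print-message
      container:
        image: busybox                  # illustrative image
        command: [echo]
        args: ['{{item}}']              # the expanded item from withItems
```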
ArchiveStrategy \u00b6 ArchiveStrategy describes how to archive files/directories when saving artifacts Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) Fields \u00b6 Field Name Field Type Description none NoneStrategy No description available tar TarStrategy No description available zip ZipStrategy No description available ArtifactGC \u00b6 ArtifactGC describes how to delete artifacts from completed Workflows - this is embedded into the WorkflowLevelArtifactGC, and also used for individual Artifacts to override that as needed Examples with this field (click to open) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) Fields \u00b6 Field Name Field Type Description podMetadata Metadata PodMetadata is an optional field for specifying the Labels and Annotations that should be assigned to the Pod doing the deletion serviceAccountName string ServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion strategy string Strategy is the strategy to use. ArtifactoryArtifact \u00b6 ArtifactoryArtifact is the location of an artifactory artifact Examples with this field (click to open) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) Fields \u00b6 Field Name Field Type Description passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password url string URL of the artifact usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username AzureArtifact \u00b6 AzureArtifact is the location of an Azure Storage artifact Examples with this field (click to open) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) Fields \u00b6 Field Name Field Type Description accountKeySecret SecretKeySelector AccountKeySecret is the secret selector to the Azure Blob Storage account access key blob string Blob is the blob name (i.e., path) in the container where the artifact resides container string Container is the container where resources will be stored endpoint string Endpoint is the service url associated with an account. It is most likely \"https://<account>.blob.core.windows.net\" useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults. 
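The ArchiveStrategy and AzureArtifact fields above are typically combined inside a template's outputs; a sketch, with placeholder account, container, and secret names:

```yaml
outputs:
  artifacts:
    - name: result
      path: /tmp/result.txt
      archive:
        none: {}                        # ArchiveStrategy: keep the file as-is instead of a tarball
      azure:                            # AzureArtifact
        endpoint: https://myaccount.blob.core.windows.net   # placeholder storage account
        container: my-container
        blob: results/result.txt
        accountKeySecret:               # secret selector for the account access key
          name: my-azure-credentials
          key: access-key
```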
GCSArtifact \u00b6 GCSArtifact is the location of a GCS artifact Examples with this field (click to open) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) Fields \u00b6 Field Name Field Type Description bucket string Bucket is the name of the bucket key string Key is the path in the bucket where the artifact resides serviceAccountKeySecret SecretKeySelector ServiceAccountKeySecret is the secret selector to the bucket's service account key GitArtifact \u00b6 GitArtifact is the location of a git artifact Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) Fields \u00b6 Field Name Field Type Description branch string Branch is the branch to fetch when SingleBranch is enabled depth integer Depth specifies clones/fetches should be shallow and include the given number of commits from the branch tip disableSubmodules boolean DisableSubmodules disables submodules during git clone fetch Array< string > Fetch specifies a number of refs that should be fetched before checkout insecureIgnoreHostKey boolean InsecureIgnoreHostKey disables SSH strict host key checking during git clone passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password repo string Repo is the git repository revision string Revision is the git commit, tag, branch to checkout singleBranch boolean SingleBranch enables single branch clone, using the branch parameter sshPrivateKeySecret SecretKeySelector SSHPrivateKeySecret is the secret selector to the repository ssh private key usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username HDFSArtifact \u00b6 HDFSArtifact is the location of an HDFS artifact Examples with this field (click to open) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) Fields \u00b6 Field Name Field Type Description addresses Array< string > Addresses are the accessible addresses of HDFS name nodes force boolean Force copies a file forcibly even if it exists hdfsUser string HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used. krbCCacheSecret SecretKeySelector KrbCCacheSecret is the secret selector for Kerberos ccache. Either ccache or keytab can be set to use Kerberos. krbConfigConfigMap ConfigMapKeySelector KrbConfig is the configmap selector for Kerberos config as string. It must be set if either ccache or keytab is used. krbKeytabSecret SecretKeySelector KrbKeytabSecret is the secret selector for Kerberos keytab. Either ccache or keytab can be set to use Kerberos. krbRealm string KrbRealm is the Kerberos realm used with Kerberos keytab. It must be set if keytab is used. krbServicePrincipalName string KrbServicePrincipalName is the principal name of Kerberos service. It must be set if either ccache or keytab is used. 
krbUsername string KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used. path string Path is a file path in HDFS HTTPArtifact \u00b6 HTTPArtifact allows a file served on HTTP to be placed as an input artifact in a container Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) Fields \u00b6 Field Name Field Type Description auth HTTPAuth Auth contains information for client authentication headers Array< Header > Headers are an optional list of headers to send with HTTP requests for artifacts url string URL of the artifact OSSArtifact \u00b6 OSSArtifact is the location of an Alibaba Cloud OSS artifact Examples with this field (click to open) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) Fields \u00b6 Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's access key bucket string Bucket is the name of the bucket createBucketIfNotPresent boolean CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist endpoint string Endpoint is the hostname of the bucket endpoint key string Key is the path in the bucket where the artifact resides lifecycleRule OSSLifecycleRule LifecycleRule specifies how to manage bucket's lifecycle secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key securityToken string SecurityToken is the user's temporary security token. 
For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults. RawArtifact \u00b6 RawArtifact allows raw string content to be placed as an artifact in a container Examples with this field (click to open) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) Fields \u00b6 Field Name Field Type Description data string Data is the string contents of the artifact S3Artifact \u00b6 S3Artifact is the location of an S3 artifact Fields \u00b6 Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's access key bucket string Bucket is the name of the bucket caSecret SecretKeySelector CASecret specifies the secret that contains the CA, used to verify the TLS connection createBucketIfNotPresent CreateS3BucketOptions CreateBucketIfNotPresent tells the driver to attempt to create the S3 bucket for output artifacts, if it doesn't exist. Setting Enabled Encryption will apply either SSE-S3 to the bucket if KmsKeyId is not set or SSE-KMS if it is. encryptionOptions S3EncryptionOptions No description available endpoint string Endpoint is the hostname of the bucket endpoint insecure boolean Insecure will connect to the service with TLS key string Key is the key in the bucket where the artifact resides region string Region contains the optional bucket region roleARN string RoleARN is the Amazon Resource Name (ARN) of the role to assume. secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults. 
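As a sketch of how the S3Artifact fields above appear in practice, here is an input artifact pulled from an S3-compatible endpoint; the bucket, key, and secret names are placeholders:

```yaml
inputs:
  artifacts:
    - name: dataset
      path: /tmp/dataset.csv
      s3:                               # S3Artifact
        endpoint: s3.amazonaws.com
        bucket: my-bucket               # placeholder bucket
        key: datasets/dataset.csv
        region: us-east-1               # optional bucket region
        accessKeySecret:                # secret selectors for the bucket credentials
          name: my-s3-credentials
          key: accessKey
        secretKeySecret:
          name: my-s3-credentials
          key: secretKey
```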
ValueFrom \u00b6 ValueFrom describes a location in which to obtain the value to a parameter Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) Fields \u00b6 Field Name Field Type Description configMapKeyRef ConfigMapKeySelector ConfigMapKeyRef is configmap selector for input parameter configuration default string Default specifies a value to be used if retrieving the value from the specified source fails event string Selector (https://github.com/antonmedv/expr) that is evaluated against the event to get the value of the parameter. E.g. 
payload.message expression string Expression, if defined, is evaluated to specify the value for the parameter jqFilter string JQFilter expression against the resource object in resource templates jsonPath string JSONPath of a resource to retrieve an output parameter value from in resource templates parameter string Parameter reference to a step or dag task in which to retrieve an output parameter value from (e.g. '{{steps.mystep.outputs.myparam}}') path string Path in the container to retrieve an output parameter value from in container templates supplied SuppliedValueFrom Supplied value to be filled in directly, either through the CLI, API, etc. Counter \u00b6 Counter is a Counter prometheus metric Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) Fields \u00b6 Field Name Field Type Description value string Value is the value of the metric Gauge \u00b6 Gauge is a Gauge prometheus metric Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) Fields \u00b6 Field Name Field Type Description operation string Operation defines the operation to apply with value and the metrics' current value realtime boolean Realtime emits this metric in real time if applicable value string Value is the value to be used in the operation with the metric's current value. If no operation is set, value is the value of the metric Histogram \u00b6 Histogram is a Histogram prometheus metric Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) Fields \u00b6 Field Name Field Type Description buckets Array< Amount > Buckets is a list of bucket divisors for the histogram value string Value is the value of the metric MetricLabel \u00b6 MetricLabel is a single label for a prometheus metric Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - 
[`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) Fields \u00b6 Field Name Field Type Description key string No description available value string No description available RetryNodeAntiAffinity \u00b6 RetryNodeAntiAffinity is a placeholder for future expansion, only empty nodeAntiAffinity is allowed. In order to prevent running steps on the same host, it uses \"kubernetes.io/hostname\". 
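A sketch of how the Counter, Gauge, and MetricLabel types above are wired into a template's metrics block; the metric names and help strings are illustrative, and the use of the {{status}} and {{duration}} variables follows the linked custom-metrics examples:

```yaml
metrics:
  prometheus:
    - name: step_result_counter
      help: Count of step executions by result status    # illustrative help text
      labels:
        - key: status                   # MetricLabel
          value: '{{status}}'
      counter:                          # Counter
        value: '1'
    - name: step_duration_gauge
      help: Duration of the step in seconds
      gauge:                            # Gauge emitted in real time
        realtime: true
        value: '{{duration}}'
```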
ContainerNode \u00b6 No description available Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) Fields \u00b6 Field Name Field Type Description args Array< string > Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell dependencies Array< string > No description available env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. 
Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. 
Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. ContainerSetRetryStrategy \u00b6 No description available Examples with this field (click to open) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description duration string Duration is the time between each retry, examples values are \"300ms\", \"1s\" or \"5m\". Valid time units are \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\". 
retries IntOrString Nbr of retries DAGTask \u00b6 DAGTask represents a node in the graph during DAG execution Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - 
[`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description arguments Arguments Arguments are the parameter and artifact arguments to the template continueOn ContinueOn ContinueOn makes argo proceed with the following step even if this step fails. Errors and Failed states can be specified dependencies Array< string > Dependencies are names of other targets which this depends on depends string Depends are names of other targets which this depends on hooks LifecycleHook Hooks hold the lifecycle hook which is invoked at lifecycle of task, irrespective of the success, failure, or error status of the primary task inline Template Inline is the template. Template must be empty if this is declared (and vice-versa). name string Name is the name of the target ~~ onExit ~~ ~~ string ~~ ~~OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template.~~ DEPRECATED: Use Hooks[exit].Template instead. template string Name of template to execute templateRef TemplateRef TemplateRef is the reference to the template resource to execute. when string When is an expression in which the task should conditionally execute withItems Array< Item > WithItems expands a task into multiple parallel tasks from the items in the list withParam string WithParam expands a task into multiple parallel tasks from the value in the parameter, which is expected to be a JSON list. 
withSequence Sequence WithSequence expands a task into a numeric sequence DataSource \u00b6 DataSource sources external data into a data template Examples with this field (click to open) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - 
[`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description artifactPaths ArtifactPaths ArtifactPaths is a data transformation that collects a list of artifact paths TransformationStep \u00b6 No description available Examples with this field (click to open) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) Fields \u00b6 Field Name Field Type Description expression string Expression defines an expr expression to apply HTTPBodySource \u00b6 HTTPBodySource contains the source of the HTTP body. Fields \u00b6 Field Name Field Type Description bytes byte No description available HTTPHeader \u00b6 No description available Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) Fields \u00b6 Field Name Field Type Description name string No description available value string No description available valueFrom HTTPHeaderSource No description available Cache \u00b6 Cache is the configuration for the type of cache to be used Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) Fields \u00b6 Field Name Field Type Description configMap ConfigMapKeySelector ConfigMap sets a ConfigMap-based cache ManifestFrom \u00b6 No description available Fields \u00b6 Field Name Field Type Description artifact Artifact Artifact contains the artifact to use ContinueOn \u00b6 ContinueOn defines if a workflow should continue even if a task or step fails/errors. It can be specified if the workflow should continue when the pod errors, fails or both. 
Examples with this field (click to open) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) Fields \u00b6 Field Name Field Type Description error boolean No description available failed boolean No description available Item \u00b6 Item expands a single workflow step into multiple parallel steps The value of Item can be a map, string, bool, or number Examples with this field (click to open) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) Sequence \u00b6 Sequence expands a workflow step into numeric range Examples with this field (click to open) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description count IntOrString Count is number of elements in the sequence (default: 0). Not to be used with end end IntOrString Number at which to end the sequence (default: 0). 
Not to be used with Count format string Format is a printf format string to format the value in the sequence start IntOrString Number at which to start the sequence (default: 0) ArtifactoryArtifactRepository \u00b6 ArtifactoryArtifactRepository defines the controller configuration for an artifactory artifact repository Examples with this field (click to open) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) Fields \u00b6 Field Name Field Type Description keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password repoURL string RepoURL is the url for artifactory repo. usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username AzureArtifactRepository \u00b6 AzureArtifactRepository defines the controller configuration for an Azure Blob Storage artifact repository Examples with this field (click to open) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) Fields \u00b6 Field Name Field Type Description accountKeySecret SecretKeySelector AccountKeySecret is the secret selector to the Azure Blob Storage account access key blobNameFormat string BlobNameFormat is defines the format of how to store blob names. Can reference workflow variables container string Container is the container where resources will be stored endpoint string Endpoint is the service url associated with an account. It is most likely \"https:// .blob.core.windows.net\" useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults. GCSArtifactRepository \u00b6 GCSArtifactRepository defines the controller configuration for a GCS artifact repository Examples with this field (click to open) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) Fields \u00b6 Field Name Field Type Description bucket string Bucket is the name of the bucket keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. serviceAccountKeySecret SecretKeySelector ServiceAccountKeySecret is the secret selector to the bucket's service account key HDFSArtifactRepository \u00b6 HDFSArtifactRepository defines the controller configuration for an HDFS artifact repository Examples with this field (click to open) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) Fields \u00b6 Field Name Field Type Description addresses Array< string > Addresses is accessible addresses of HDFS name nodes force boolean Force copies a file forcibly even if it exists hdfsUser string HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used. krbCCacheSecret SecretKeySelector KrbCCacheSecret is the secret selector for Kerberos ccache Either ccache or keytab can be set to use Kerberos. krbConfigConfigMap ConfigMapKeySelector KrbConfig is the configmap selector for Kerberos config as string It must be set if either ccache or keytab is used. 
krbKeytabSecret SecretKeySelector KrbKeytabSecret is the secret selector for Kerberos keytab Either ccache or keytab can be set to use Kerberos. krbRealm string KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used. krbServicePrincipalName string KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used. krbUsername string KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used. pathFormat string PathFormat is defines the format of path to store a file. Can reference workflow variables OSSArtifactRepository \u00b6 OSSArtifactRepository defines the controller configuration for an OSS artifact repository Examples with this field (click to open) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) Fields \u00b6 Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's access key bucket string Bucket is the name of the bucket createBucketIfNotPresent boolean CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist endpoint string Endpoint is the hostname of the bucket endpoint keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. lifecycleRule OSSLifecycleRule LifecycleRule specifies how to manage bucket's lifecycle secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key securityToken string SecurityToken is the user's temporary security token. For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults. S3ArtifactRepository \u00b6 S3ArtifactRepository defines the controller configuration for an S3 artifact repository Fields \u00b6 Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's access key bucket string Bucket is the name of the bucket caSecret SecretKeySelector CASecret specifies the secret that contains the CA, used to verify the TLS connection createBucketIfNotPresent CreateS3BucketOptions CreateBucketIfNotPresent tells the driver to attempt to create the S3 bucket for output artifacts, if it doesn't exist. Setting Enabled Encryption will apply either SSE-S3 to the bucket if KmsKeyId is not set or SSE-KMS if it is. encryptionOptions S3EncryptionOptions No description available endpoint string Endpoint is the hostname of the bucket endpoint insecure boolean Insecure will connect to the service with TLS keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. ~~ keyPrefix ~~ ~~ string ~~ ~~KeyPrefix is prefix used as part of the bucket key in which the controller will store artifacts.~~ DEPRECATED. Use KeyFormat instead region string Region contains the optional bucket region roleARN string RoleARN is the Amazon Resource Name (ARN) of the role to assume. secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults. MutexHolding \u00b6 MutexHolding describes the mutex and the object which is holding it. 
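MutexHolding itself only appears in workflow status; for orientation, a hedged sketch of the spec-side synchronization block that produces such a holding entry (the mutex name is illustrative) could be:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: mutex-example-
spec:
  entrypoint: main
  synchronization:
    mutex:
      name: my-shared-lock   # only one workflow holding this mutex runs at a time
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [sleep, "30"]
```

While such a workflow runs, the status records the current holder using the reference formats listed in the Fields table below.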
Fields \u00b6 Field Name Field Type Description holder string Holder is a reference to the object which holds the Mutex. Holding Scenario: 1. Current workflow's NodeID which is holding the lock. e.g: ${NodeID} Waiting Scenario: 1. Current workflow or other workflow NodeID which is holding the lock. e.g: ${WorkflowName}/${NodeID} mutex string Reference for the mutex e.g: ${namespace}/mutex/${mutexName} SemaphoreHolding \u00b6 No description available Fields \u00b6 Field Name Field Type Description holders Array< string > Holders stores the list of current holder names in the io.argoproj.workflow.v1alpha1. semaphore string Semaphore stores the semaphore name. NoneStrategy \u00b6 NoneStrategy indicates to skip tar process and upload the files or directory tree as independent files. Note that if the artifact is a directory, the artifact driver must support the ability to save/load the directory appropriately. Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) TarStrategy \u00b6 TarStrategy will tar and gzip the file or directory when saving Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) Fields \u00b6 Field Name Field Type Description compressionLevel integer CompressionLevel specifies the gzip compression level to use for the artifact. Defaults to gzip.DefaultCompression. 
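As a rough illustration of the NoneStrategy and TarStrategy described above (artifact names and paths are made up), an output artifact block can disable or tune archiving like this:

```yaml
outputs:
  artifacts:
    - name: etc-no-archive
      path: /etc
      archive:
        none: {}              # NoneStrategy: upload files/directories as-is
    - name: etc-tarball
      path: /etc
      archive:
        tar:
          compressionLevel: 9 # TarStrategy: tar + gzip with a chosen level
```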
ZipStrategy \u00b6 ZipStrategy will unzip zipped input artifacts HTTPAuth \u00b6 No description available Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) Fields \u00b6 Field Name Field Type Description basicAuth BasicAuth No description available clientCert ClientCertAuth No description available oauth2 OAuth2Auth No description available Header \u00b6 Header indicates a key-value request header to be used when fetching artifacts over HTTP Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) Fields \u00b6 Field Name Field Type Description name string Name is the header name value string Value is the literal value to use for the header OSSLifecycleRule \u00b6 OSSLifecycleRule specifies how to manage a bucket's lifecycle Fields \u00b6 Field Name Field Type Description markDeletionAfterDays integer MarkDeletionAfterDays is the number of days before we delete objects in the bucket markInfrequentAccessAfterDays integer MarkInfrequentAccessAfterDays is the number of days before we convert the objects in the bucket to Infrequent Access (IA) storage type CreateS3BucketOptions \u00b6 CreateS3BucketOptions are options used to determine the automatic bucket-creation process Fields \u00b6 Field Name Field Type Description objectLocking boolean ObjectLocking enables object locking S3EncryptionOptions \u00b6 S3EncryptionOptions are used to determine encryption options during s3 operations Fields \u00b6 Field Name Field Type Description enableEncryption boolean EnableEncryption tells the driver to encrypt objects if set to true. If kmsKeyId and serverSideCustomerKeySecret are not set, SSE-S3 will be used kmsEncryptionContext string KmsEncryptionContext is a json blob that contains an encryption context. See https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context for more information kmsKeyId string KMSKeyId tells the driver to encrypt the object using the specified KMS Key. serverSideCustomerKeySecret SecretKeySelector ServerSideCustomerKeySecret tells the driver to encrypt the output artifacts using SSE-C with the specified secret. SuppliedValueFrom \u00b6 SuppliedValueFrom is a placeholder for a value to be filled in directly, either through the CLI, API, etc. Examples with this field (click to open) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) Amount \u00b6 Amount represents a numeric amount. Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) ArtifactPaths \u00b6 ArtifactPaths expands a step from a collection of artifacts Examples with this field (click to open) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) Fields \u00b6 Field Name Field Type Description archive ArchiveStrategy Archive controls how the artifact will be saved to the artifact repository.
archiveLogs boolean ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC describes the strategy to use when to deleting an artifact from completed or deleted workflows artifactory ArtifactoryArtifact Artifactory contains artifactory artifact location details azure AzureArtifact Azure contains Azure Storage artifact location details deleted boolean Has this been deleted? from string From allows an artifact to reference an artifact from a previous step fromExpression string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCS contains GCS artifact location details git GitArtifact Git contains git artifact location details globalName string GlobalName exports an output artifact to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFS contains HDFS artifact location details http HTTPArtifact HTTP contains HTTP artifact location details mode integer mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string name of the artifact. must be unique within a template's inputs/outputs. optional boolean Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSS contains OSS artifact location details path string Path is the container path to the artifact raw RawArtifact Raw contains raw artifact location details recurseMode boolean If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3 contains S3 artifact location details subPath string SubPath allows an artifact to be sourced from a subpath within the specified source HTTPHeaderSource \u00b6 No description available Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - 
[`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) Fields \u00b6 Field Name Field Type Description secretKeyRef SecretKeySelector No description available BasicAuth \u00b6 BasicAuth describes the secret selectors required for basic authentication Fields \u00b6 Field Name Field Type Description passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username ClientCertAuth \u00b6 ClientCertAuth holds necessary information for client authentication via certificates Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) Fields \u00b6 Field Name Field Type Description clientCertSecret SecretKeySelector No description available clientKeySecret SecretKeySelector No description available OAuth2Auth \u00b6 OAuth2Auth holds all information for client authentication via OAuth2 tokens Fields \u00b6 Field Name Field Type Description clientIDSecret SecretKeySelector No description available clientSecretSecret SecretKeySelector No description available endpointParams Array< OAuth2EndpointParam > No description available scopes Array< string > No description available tokenURLSecret SecretKeySelector No description available OAuth2EndpointParam \u00b6 EndpointParam is for requesting optional fields that should be sent in the oauth request Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) Fields \u00b6 Field Name Field Type Description key string Name is the header name value string Value is the literal value to use for the header External Fields \u00b6 ObjectMeta \u00b6 ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create. 
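Since ObjectMeta is ordinary Kubernetes metadata, a hedged sketch of how it typically appears on a Workflow (the label and annotation keys shown are illustrative placeholders) is:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-         # server appends a unique suffix to build the name
  namespace: argo
  labels:
    example.com/team: data-platform  # free-form key/value labels used for selection
  annotations:
    example.com/description: "Arbitrary, non-queryable metadata"
spec:
  entrypoint: whalesay
  templates:
    - name: whalesay
      container:
        image: docker/whalesay
        command: [cowsay, "hello"]
```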
Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - 
[`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - 
[`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - 
[`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - 
[`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - 
[`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml) Fields \u00b6 Field Name Field Type Description annotations Map< string , string > Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations clusterName string The name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request. creationTimestamp Time CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata deletionGracePeriodSeconds integer Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. deletionTimestamp Time DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested. Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata finalizers Array< string > Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. 
Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency generation integer A sequence number representing a specific generation of the desired state. Populated by the system. Read-only. labels Map< string , string > Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels managedFields Array< ManagedFieldsEntry > ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like \"ci-cd\". The set of fields is always in the version that the workflow used when modifying the object. name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences Array< OwnerReference > List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. 
resourceVersion string An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. Value must be treated as opaque by clients. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency ~~ selfLink ~~ ~~ string ~~ ~~SelfLink is a URL representing this object. Populated by the system. Read-only.~~ DEPRECATED Kubernetes will stop propagating this field in 1.20 release and the field is planned to be removed in 1.21 release. uid string UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations. Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids Affinity \u00b6 Affinity is a group of affinity scheduling rules. Fields \u00b6 Field Name Field Type Description nodeAffinity NodeAffinity Describes node affinity scheduling rules for the pod. podAffinity PodAffinity Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity PodAntiAffinity Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). PodDNSConfig \u00b6 PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. Examples with this field (click to open) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) Fields \u00b6 Field Name Field Type Description nameservers Array< string > A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options Array< PodDNSConfigOption > A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. searches Array< string > A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. HostAlias \u00b6 HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Fields \u00b6 Field Name Field Type Description hostnames Array< string > Hostnames for the above IP address. ip string IP address of the host file entry. LocalObjectReference \u00b6 LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Examples with this field (click to open) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) Fields \u00b6 Field Name Field Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names PodDisruptionBudgetSpec \u00b6 PodDisruptionBudgetSpec is a description of a PodDisruptionBudget.
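The pod-level types above (PodDNSConfig, HostAlias, LocalObjectReference, PodDisruptionBudgetSpec) are all set on the workflow spec; a hedged sketch combining them (the IP addresses, hostname, and secret name are placeholders) might be:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: pod-settings-
spec:
  entrypoint: main
  podDisruptionBudget:          # PodDisruptionBudgetSpec applied to this workflow's pods
    minAvailable: "100%"        # block voluntary evictions while steps are running
  dnsConfig:                    # PodDNSConfig merged into each pod
    nameservers:
      - 1.2.3.4
    options:
      - name: ndots
        value: "2"
  hostAliases:                  # HostAlias entries written to each pod's /etc/hosts
    - ip: 127.0.0.1
      hostnames:
        - example.local
  imagePullSecrets:             # LocalObjectReference to a registry credential secret
    - name: my-registry-creds
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [echo, done]
```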
Examples with this field (click to open) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) Fields \u00b6 Field Name Field Type Description maxUnavailable IntOrString An eviction is allowed if at most \"maxUnavailable\" pods selected by \"selector\" are unavailable after the eviction, i.e. even in absence of the evicted pod. For example, one can prevent all voluntary evictions by specifying 0. This is a mutually exclusive setting with \"minAvailable\". minAvailable IntOrString An eviction is allowed if at least \"minAvailable\" pods selected by \"selector\" will still be available after the eviction, i.e. even in the absence of the evicted pod. So for example you can prevent all voluntary evictions by specifying \"100%\". selector LabelSelector Label query over pods whose evictions are managed by the disruption budget. A null selector will match no pods, while an empty ({}) selector will select all pods within the namespace. PodSecurityContext \u00b6 PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Examples with this field (click to open) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) Fields \u00b6 Field Name Field Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are \"OnRootMismatch\" and \"Always\". If not specified, \"Always\" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. 
Note that this field cannot be set when spec.os.name is windows. seLinuxOptions SELinuxOptions The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile SeccompProfile The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups Array< integer > A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows. sysctls Array< Sysctl > Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. windowsOptions WindowsSecurityContextOptions The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Toleration \u00b6 The pod this Toleration is attached to tolerates any taint that matches the triple (key, value, effect) using the matching operator. Fields \u00b6 Field Name Field Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - \"NoExecute\" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - \"NoSchedule\" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - \"PreferNoSchedule\" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - \"Equal\" - \"Exists\" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
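As a usage sketch of the Toleration fields above, the snippet below shows a Workflow whose pods tolerate a hypothetical `workload=batch:NoSchedule` taint; the taint key and value are illustrative, not taken from any linked example.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: toleration-example-
spec:
  entrypoint: main
  # Applied to every pod the workflow creates.
  tolerations:
    - key: workload        # taint key to match (illustrative)
      operator: Equal      # match key and value exactly
      value: batch
      effect: NoSchedule   # only tolerate NoSchedule taints
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [echo, hello]
```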
PersistentVolumeClaim \u00b6 PersistentVolumeClaim is a user's request for and claim to a persistent volume Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec PersistentVolumeClaimSpec Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status PersistentVolumeClaimStatus Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Volume \u00b6 Volume represents a named volume in a pod that may be accessed by any container in the pod. Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) Fields \u00b6 Field Name Field Type Description awsElasticBlockStore AWSElasticBlockStoreVolumeSource AWSElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk AzureDiskVolumeSource AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. 
azureFile AzureFileVolumeSource AzureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs CephFSVolumeSource CephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder CinderVolumeSource Cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap ConfigMapVolumeSource ConfigMap represents a configMap that should populate this volume csi CSIVolumeSource CSI (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI DownwardAPIVolumeSource DownwardAPI represents downward API about the pod that should populate this volume emptyDir EmptyDirVolumeSource EmptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral EphemeralVolumeSource Ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc FCVolumeSource FC represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume FlexVolumeSource FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker FlockerVolumeSource Flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk GCEPersistentDiskVolumeSource GCEPersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk ~~ gitRepo ~~ ~~ GitRepoVolumeSource ~~ ~~GitRepo represents a git repository at a particular revision.~~ DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs GlusterfsVolumeSource Glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath HostPathVolumeSource HostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath iscsi ISCSIVolumeSource ISCSI represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string Volume's name. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs NFSVolumeSource NFS represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim PersistentVolumeClaimVolumeSource PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk PhotonPersistentDiskVolumeSource PhotonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume PortworxVolumeSource PortworxVolume represents a portworx volume attached and mounted on kubelets host machine projected ProjectedVolumeSource Items for all in one resources secrets, configmaps, and downward API quobyte QuobyteVolumeSource Quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd RBDVolumeSource RBD represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO ScaleIOVolumeSource ScaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret SecretVolumeSource Secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos StorageOSVolumeSource StorageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume VsphereVirtualDiskVolumeSource VsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Time \u00b6 Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers. ObjectReference \u00b6 ObjectReference contains enough information to let you inspect or modify the referred object. Fields \u00b6 Field Name Field Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: \"spec.containers{name}\" (where \"name\" refers to the name of the container that triggered the event) or if no container name is specified \"spec.containers[2]\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids Duration \u00b6 Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json. Examples with this field (click to open) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) Fields \u00b6 Field Name Field Type Description duration string No description available LabelSelector \u00b6 A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Examples with this field (click to open) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) Fields \u00b6 Field Name Field Type Description matchExpressions Array< LabelSelectorRequirement > matchExpressions is a list of label selector requirements. The requirements are ANDed. matchLabels Map< string , string > matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values array contains only \"value\". The requirements are ANDed. IntOrString \u00b6 No description available Examples with this field (click to open) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Container \u00b6 A single application container that you want to run within a pod. 
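Before the example links and field table below, a minimal sketch of how a container is declared inside a Workflow template; the image, command, and args are placeholders.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: container-example-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.18      # Docker image name
        command: [sh, -c]       # entrypoint array; not run through a shell by Kubernetes itself
        args: [echo hello]      # arguments passed to the entrypoint
        env:
          - name: GREETING      # simple environment variable
            value: hello
```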
Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - 
[`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - 
[`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - 
[`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - 
[`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) Fields \u00b6 Field Name Field Type Description args Array< string > Arguments to the entrypoint. The docker image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The docker image's ENTRYPOINT is used if this is not provided. 
Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Docker image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - \"Always\" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - \"IfNotPresent\" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - \"Never\" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. 
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - \"FallbackToLogsOnError\" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - \"File\" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. ConfigMapKeySelector \u00b6 Selects a key from a ConfigMap. 
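A common place this selector appears in Argo Workflows is an input parameter sourced from a ConfigMap, as in the `arguments-parameters-from-configmap.yaml` example linked below. A minimal sketch follows; the ConfigMap name `my-config` and key `message` are hypothetical.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: configmap-param-example-
spec:
  entrypoint: main
  templates:
    - name: main
      inputs:
        parameters:
          - name: message
            valueFrom:
              configMapKeyRef:    # ConfigMapKeySelector
                name: my-config   # hypothetical ConfigMap in the same namespace
                key: message      # key to select
      container:
        image: alpine:3.18
        command: [echo]
        args: ['{{inputs.parameters.message}}']
```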
Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) Fields \u00b6 Field Name Field Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined VolumeMount \u00b6 VolumeMount describes a mounting of a Volume within a container. Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to \"\" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to \"\" (volume's root). SubPathExpr and SubPath are mutually exclusive. EnvVar \u00b6 EnvVar represents an environment variable present in a Container. 
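A minimal sketch showing both a literal value and a value sourced from a Secret (compare the `secrets.yaml` example linked below); the Secret name `my-secret` and key `password` are hypothetical.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: envvar-example-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [sh, -c]
        args: [env]
        env:
          - name: GREETING          # literal value
            value: hello
          - name: MY_PASSWORD       # resolved from an EnvVarSource at pod start
            valueFrom:
              secretKeyRef:
                name: my-secret     # hypothetical Secret name
                key: password
```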
Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) Fields \u00b6 Field Name Field Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to \"\". valueFrom EnvVarSource Source for the environment variable's value. Cannot be used if value is not empty. EnvFromSource \u00b6 EnvFromSource represents the source of a set of ConfigMaps Fields \u00b6 Field Name Field Type Description configMapRef ConfigMapEnvSource The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef SecretEnvSource The Secret to select from Lifecycle \u00b6 Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Fields \u00b6 Field Name Field Type Description postStart LifecycleHandler PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop LifecycleHandler PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Probe \u00b6 Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Fields \u00b6 Field Name Field Type Description exec ExecAction Exec specifies the action to take. 
failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc GRPCAction GRPC specifies an action involving a GRPC port. This is an alpha field and requires enabling GRPCContainerProbe feature gate. httpGet HTTPGetAction HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket TCPSocketAction TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes ContainerPort \u00b6 ContainerPort represents a network port in a single container. Fields \u00b6 Field Name Field Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to \"TCP\". Possible enum values: - \"SCTP\" is the SCTP protocol. - \"TCP\" is the TCP protocol. - \"UDP\" is the UDP protocol. ResourceRequirements \u00b6 ResourceRequirements describes the compute resource requirements. 
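A minimal sketch of requests and limits on a container template; the quantities are placeholders, see the linked examples for realistic values.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: resources-example-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [echo, hello]
        resources:
          requests:          # minimum the scheduler reserves for the pod
            cpu: 100m
            memory: 64Mi
          limits:            # maximum the container is allowed to consume
            cpu: 500m
            memory: 128Mi
```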
Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description limits Quantity Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests Quantity Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ SecurityContext \u00b6 SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Examples with this field (click to open) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) Fields \u00b6 Field Name Field Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities Capabilities The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. 
runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions SELinuxOptions The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile SeccompProfile The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions WindowsSecurityContextOptions The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. VolumeDevice \u00b6 volumeDevice describes a mapping of a raw block device within a container. Fields \u00b6 Field Name Field Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod SecretKeySelector \u00b6 SecretKeySelector selects a key of a Secret. Examples with this field (click to open) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) Fields \u00b6 Field Name Field Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined ManagedFieldsEntry \u00b6 ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to. Fields \u00b6 Field Name Field Type Description apiVersion string APIVersion defines the version of this resource that this field set applies to. The format is \"group/version\" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted. 
fieldsType string FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: \"FieldsV1\" fieldsV1 FieldsV1 FieldsV1 holds the first JSON version format as described in the \"FieldsV1\" type. manager string Manager is an identifier of the workflow managing these fields. operation string Operation is the type of operation which lead to this ManagedFieldsEntry being created. The only valid values for this field are 'Apply' and 'Update'. subresource string Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource. time Time Time is timestamp of when these fields were set. It should always be empty if Operation is 'Apply' OwnerReference \u00b6 OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. Fields \u00b6 Field Name Field Type Description apiVersion string API version of the referent. blockOwnerDeletion boolean If true, AND if the owner has the \"foregroundDeletion\" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. Defaults to false. To set this field, a user needs \"delete\" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names uid string UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids NodeAffinity \u00b6 Node affinity is a group of node affinity scheduling rules. Fields \u00b6 Field Name Field Type Description preferredDuringSchedulingIgnoredDuringExecution Array< PreferredSchedulingTerm > The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution NodeSelector If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. PodAffinity \u00b6 Pod affinity is a group of inter pod affinity scheduling rules. 
Fields \u00b6 Field Name Field Type Description preferredDuringSchedulingIgnoredDuringExecution Array< WeightedPodAffinityTerm > The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution Array< PodAffinityTerm > If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. PodAntiAffinity \u00b6 Pod anti affinity is a group of inter pod anti affinity scheduling rules. Fields \u00b6 Field Name Field Type Description preferredDuringSchedulingIgnoredDuringExecution Array< WeightedPodAffinityTerm > The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution Array< PodAffinityTerm > If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. PodDNSConfigOption \u00b6 PodDNSConfigOption defines DNS resolver options of a pod. Examples with this field (click to open) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) Fields \u00b6 Field Name Field Type Description name string Required. value string No description available SELinuxOptions \u00b6 SELinuxOptions are the labels to be applied to the container Fields \u00b6 Field Name Field Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 
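As an illustration of how the SecurityContext and SELinuxOptions fields above are typically used, here is a minimal sketch of a workflow whose container sets a restrictive securityContext; the image, user ID, and SELinux level are placeholder values rather than a recommended configuration.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: security-context-example-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.18            # placeholder image
        command: [sh, -c, id]
        securityContext:              # SecurityContext fields documented above
          runAsNonRoot: true
          runAsUser: 1000
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]
          seLinuxOptions:             # SELinuxOptions; level is illustrative
            level: "s0:c123,c456"
```

As the field descriptions note, container-level settings like these take precedence over the same fields set in the pod-level PodSecurityContext.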
SeccompProfile \u00b6 SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Fields \u00b6 Field Name Field Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is \"Localhost\". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - \"Localhost\" indicates a profile defined in a file on the node should be used. The file's location relative to /seccomp. - \"RuntimeDefault\" represents the default container runtime seccomp profile. - \"Unconfined\" indicates no seccomp profile is applied (A.K.A. unconfined). Sysctl \u00b6 Sysctl defines a kernel parameter to be set Fields \u00b6 Field Name Field Type Description name string Name of a property to set value string Value of a property to set WindowsSecurityContextOptions \u00b6 WindowsSecurityContextOptions contain Windows-specific options and credentials. Fields \u00b6 Field Name Field Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
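A similar sketch for the SeccompProfile type: assuming the workflow-level `securityContext` (a PodSecurityContext) is used, a RuntimeDefault seccomp profile can be applied to every pod the workflow creates; container-level seccomp options, if present, override it per the description above.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: seccomp-example-
spec:
  entrypoint: main
  securityContext:            # assumed workflow-level PodSecurityContext
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault    # other valid types: Localhost, Unconfined
  templates:
    - name: main
      container:
        image: alpine:3.18    # placeholder image
        command: [echo, hello]
```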
PersistentVolumeClaimSpec \u00b6 PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - 
[`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - 
[`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - 
[`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - 
[`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - 
[`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml) Fields \u00b6 Field Name Field Type Description accessModes Array< string > AccessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource TypedLocalObjectReference This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. dataSourceRef TypedLocalObjectReference Specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Alpha) Using this field requires the AnyVolumeDataSource feature gate to be enabled. resources ResourceRequirements Resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector LabelSelector A label query over volumes to consider for binding. storageClassName string Name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string VolumeName is the binding reference to the PersistentVolume backing this claim. PersistentVolumeClaimStatus \u00b6 PersistentVolumeClaimStatus is the current status of a persistent volume claim. 
Fields \u00b6 Field Name Field Type Description accessModes Array< string > AccessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResources Quantity The storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity Quantity Represents the actual resources of the underlying volume. conditions Array< PersistentVolumeClaimCondition > Current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. phase string Phase represents the current phase of PersistentVolumeClaim. Possible enum values: - \"Bound\" used for PersistentVolumeClaims that are bound - \"Lost\" used for PersistentVolumeClaims that lost their underlying PersistentVolume. The claim was bound to a PersistentVolume and this volume does not exist any longer and all data on it was lost. - \"Pending\" used for PersistentVolumeClaims that are not yet bound resizeStatus string ResizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. AWSElasticBlockStoreVolumeSource \u00b6 Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). readOnly boolean Specify \"true\" to force and set the ReadOnly property in VolumeMounts to \"true\". If omitted, the default is \"false\". More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string Unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore AzureDiskVolumeSource \u00b6 AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Fields \u00b6 Field Name Field Type Description cachingMode string Host Caching mode: None, Read Only, Read Write. 
diskName string The Name of the data disk in the blob storage diskURI string The URI the data disk in the blob storage fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. kind string Expected values Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. AzureFileVolumeSource \u00b6 AzureFile represents an Azure File Service mount on the host and bind mount to the pod. Fields \u00b6 Field Name Field Type Description readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string the name of secret that contains Azure Storage Account Name and Key shareName string Share Name CephFSVolumeSource \u00b6 Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Fields \u00b6 Field Name Field Type Description monitors Array< string > Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef LocalObjectReference Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string Optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it CinderVolumeSource \u00b6 Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef LocalObjectReference Optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volume id used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md ConfigMapVolumeSource \u00b6 Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. 
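A minimal sketch of a ConfigMap-backed volume in a workflow, assuming a ConfigMap named `my-app-config` already exists; the `items` mapping projects only the listed key into the volume, as described by the fields below.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: configmap-volume-
spec:
  entrypoint: main
  volumes:
    - name: app-config
      configMap:
        name: my-app-config        # assumed pre-existing ConfigMap
        items:
          - key: settings.json     # only this key is projected
            path: settings.json
  templates:
    - name: main
      container:
        image: alpine:3.18         # placeholder image
        command: [cat, /config/settings.json]
        volumeMounts:
          - name: app-config
            mountPath: /config
```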
Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) Fields \u00b6 Field Name Field Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its keys must be defined CSIVolumeSource \u00b6 Represents a source location of a volume to mount, managed by an external CSI driver Fields \u00b6 Field Name Field Type Description driver string Driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string Filesystem type to mount. Ex. \"ext4\", \"xfs\", \"ntfs\". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef LocalObjectReference NodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean Specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes Map< string , string > VolumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. DownwardAPIVolumeSource \u00b6 DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. Fields \u00b6 Field Name Field Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items Array< DownwardAPIVolumeFile > Items is a list of downward API volume files EmptyDirVolumeSource \u00b6 Represents an empty directory for a pod. 
Empty directory volumes support ownership management and SELinux relabeling. Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) Fields \u00b6 Field Name Field Type Description medium string What type of storage medium should back this directory. The default is \"\" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit Quantity Total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir EphemeralVolumeSource \u00b6 Represents an ephemeral volume that is handled by a normal storage driver. Fields \u00b6 Field Name Field Type Description volumeClaimTemplate PersistentVolumeClaimTemplate Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be - where is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. FCVolumeSource \u00b6 Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. lun integer Optional: FC target lun number readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs Array< string > Optional: FC target worldwide names (WWNs) wwids Array< string > Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. FlexVolumeSource \u00b6 FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. 
Fields \u00b6 Field Name Field Type Description driver string Driver is the name of the driver to use for this volume. fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". The default filesystem depends on FlexVolume script. options Map< string , string > Optional: Extra command options if any. readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference Optional: SecretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. FlockerVolumeSource \u00b6 Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Fields \u00b6 Field Name Field Type Description datasetName string Name of the dataset stored as metadata -> name on the dataset for Flocker should be considered as deprecated datasetUUID string UUID of the dataset. This is unique identifier of a Flocker dataset GCEPersistentDiskVolumeSource \u00b6 Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string Unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk GitRepoVolumeSource \u00b6 Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Fields \u00b6 Field Name Field Type Description directory string Target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string Repository URL revision string Commit hash for the specified revision. 
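Since gitRepo volumes are deprecated, the replacement pattern mentioned above (clone into an emptyDir from an init container, then mount that emptyDir in the main container) might look like the following sketch in a workflow template; the git image and repository URL are placeholders.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: git-clone-emptydir-
spec:
  entrypoint: main
  volumes:
    - name: src
      emptyDir: {}
  templates:
    - name: main
      initContainers:
        - name: clone
          image: alpine/git:2.43.0    # placeholder git image
          args: [clone, --depth=1, "https://github.com/example/repo.git", /src]
          volumeMounts:
            - name: src
              mountPath: /src
      container:
        image: alpine:3.18            # placeholder image
        command: [ls, /src]
        volumeMounts:
          - name: src
            mountPath: /src
```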
GlusterfsVolumeSource \u00b6 Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. Fields \u00b6 Field Name Field Type Description endpoints string EndpointsName is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string Path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean ReadOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod HostPathVolumeSource \u00b6 Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. Fields \u00b6 Field Name Field Type Description path string Path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string Type for HostPath Volume Defaults to \"\" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath ISCSIVolumeSource \u00b6 Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Fields \u00b6 Field Name Field Type Description chapAuthDiscovery boolean whether support iSCSI Discovery CHAP authentication chapAuthSession boolean whether support iSCSI Session CHAP authentication fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string Custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface : will be created for the connection. iqn string Target iSCSI Qualified Name. iscsiInterface string iSCSI Interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer iSCSI Target Lun number. portals Array< string > iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef LocalObjectReference CHAP Secret for iSCSI target and initiator authentication targetPortal string iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). NFSVolumeSource \u00b6 Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. Fields \u00b6 Field Name Field Type Description path string Path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean ReadOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string Server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs PersistentVolumeClaimVolumeSource \u00b6 PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. 
This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). Examples with this field (click to open) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) Fields \u00b6 Field Name Field Type Description claimName string ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean Will force the ReadOnly setting in VolumeMounts. Default false. PhotonPersistentDiskVolumeSource \u00b6 Represents a Photon Controller persistent disk resource. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. pdID string ID that identifies Photon Controller persistent disk PortworxVolumeSource \u00b6 PortworxVolumeSource represents a Portworx volume resource. Fields \u00b6 Field Name Field Type Description fsType string FSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\". Implicitly inferred to be \"ext4\" if unspecified. readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string VolumeID uniquely identifies a Portworx volume ProjectedVolumeSource \u00b6 Represents a projected volume source Fields \u00b6 Field Name Field Type Description defaultMode integer Mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources Array< VolumeProjection > list of volume projections QuobyteVolumeSource \u00b6 Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. Fields \u00b6 Field Name Field Type Description group string Group to map volume access to Default is no group readOnly boolean ReadOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string Registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string Tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string User to map volume access to Defaults to serviceaccount user volume string Volume is a string that references an already created Quobyte volume by name. RBDVolumeSource \u00b6 Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". 
Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string The rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string Keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors Array< string > A collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string The rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef LocalObjectReference SecretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string The rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it ScaleIOVolumeSource \u00b6 ScaleIOVolumeSource represents a persistent ScaleIO volume Fields \u00b6 Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Default is \"xfs\". gateway string The host address of the ScaleIO API Gateway. protectionDomain string The name of the ScaleIO Protection Domain for the configured storage. readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference SecretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean Flag to enable/disable SSL communication with Gateway, default false storageMode string Indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string The ScaleIO Storage Pool associated with the protection domain. system string The name of the storage system as configured in ScaleIO. volumeName string The name of a volume already created in the ScaleIO system that is associated with this volume source. SecretVolumeSource \u00b6 Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) Fields \u00b6 Field Name Field Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. 
items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. optional boolean Specify whether the Secret or its keys must be defined secretName string Name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret StorageOSVolumeSource \u00b6 Represents a StorageOS persistent volume resource. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference SecretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string VolumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string VolumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to \"default\" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. VsphereVirtualDiskVolumeSource \u00b6 Represents a vSphere volume resource. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. storagePolicyID string Storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string Storage Policy Based Management (SPBM) profile name. volumePath string Path that identifies vSphere volume vmdk LabelSelectorRequirement \u00b6 A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Fields \u00b6 Field Name Field Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values Array< string > values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. EnvVarSource \u00b6 EnvVarSource represents a source for the value of an EnvVar. 
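As a quick orientation, the sketch below shows how an EnvVarSource typically appears under a container's env entry; the ConfigMap name app-config, its key log-level, the pod name, and the image are illustrative assumptions rather than values taken from this reference.

```yaml
# Minimal sketch (assumed names): one env var from a ConfigMap key, one from a pod field.
apiVersion: v1
kind: Pod
metadata:
  name: envvarsource-demo        # hypothetical name
spec:
  containers:
    - name: main
      image: alpine:3.18         # hypothetical image
      command: [sh, -c, env]
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:     # EnvVarSource.configMapKeyRef
              name: app-config   # hypothetical ConfigMap
              key: log-level
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:            # EnvVarSource.fieldRef
              fieldPath: metadata.namespace
```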
Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) Fields \u00b6 Field Name Field Type Description configMapKeyRef ConfigMapKeySelector Selects a key of a ConfigMap. fieldRef ObjectFieldSelector Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef ResourceFieldSelector Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
secretKeyRef SecretKeySelector Selects a key of a secret in the pod's namespace ConfigMapEnvSource \u00b6 ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Fields \u00b6 Field Name Field Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined SecretEnvSource \u00b6 SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Fields \u00b6 Field Name Field Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined LifecycleHandler \u00b6 LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket, must be specified. Fields \u00b6 Field Name Field Type Description exec ExecAction Exec specifies the action to take. httpGet HTTPGetAction HTTPGet specifies the http request to perform. tcpSocket TCPSocketAction Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a tcp handler is specified. ExecAction \u00b6 ExecAction describes a \"run in container\" action. Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) Fields \u00b6 Field Name Field Type Description command Array< string > Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc.) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. GRPCAction \u00b6 No description available Fields \u00b6 Field Name Field Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC. HTTPGetAction \u00b6 HTTPGetAction describes an action based on HTTP Get requests. Examples with this field (click to open) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) Fields \u00b6 Field Name Field Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set \"Host\" in httpHeaders instead. httpHeaders Array< HTTPHeader > Custom headers to set in the request. HTTP allows repeated headers. path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - \"HTTP\" means that the scheme used will be http:// - \"HTTPS\" means that the scheme used will be https:// TCPSocketAction \u00b6 TCPSocketAction describes an action based on opening a socket Fields \u00b6 Field Name Field Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. Quantity \u00b6 Quantity is a fixed-point representation of a number. It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors. The serialization format is: <quantity> ::= <signedNumber><suffix> (Note that <suffix> may be empty, from the \"\" case in <decimalSI>.) <digit> ::= 0 | 1 | ... | 9 <digits> ::= <digit> | <digit><digits> <number> ::= <digits> | <digits>.<digits> | <digits>. | .<digits> <sign> ::= \"+\" | \"-\" <signedNumber> ::= <number> | <sign><number> <suffix> ::= <binarySI> | <decimalExponent> | <decimalSI> <binarySI> ::= Ki | Mi | Gi | Ti | Pi | Ei (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html) <decimalSI> ::= m | \"\" | k | M | G | T | P | E (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.) <decimalExponent> ::= \"e\" <signedNumber> | \"E\" <signedNumber> No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will be rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized. Before serializing, Quantity will be put in \"canonical form\". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that: a. No precision is lost b. No fractional digits will be emitted c. The exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative. Examples: 1.5 will be serialized as \"1500m\" 1.5Gi will be serialized as \"1536Mi\" Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise. Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.) This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation. Examples with this field (click to open) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) Capabilities \u00b6 Adds and removes POSIX capabilities from running containers. Fields \u00b6 Field Name Field Type Description add Array< string > Added capabilities drop Array< string > Removed capabilities FieldsV1 \u00b6 FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format. Each key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item.
The string will follow one of these four formats: 'f:<name>', where <name> is the name of a field in a struct, or key in a map 'v:<value>', where <value> is the exact json formatted value of a list item 'i:<index>', where <index> is the position of an item in a list 'k:<keys>', where <keys> is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set. The exact format is defined in sigs.k8s.io/structured-merge-diff PreferredSchedulingTerm \u00b6 An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Fields \u00b6 Field Name Field Type Description preference NodeSelectorTerm A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. NodeSelector \u00b6 A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. Fields \u00b6 Field Name Field Type Description nodeSelectorTerms Array< NodeSelectorTerm > Required. A list of node selector terms. The terms are ORed. WeightedPodAffinityTerm \u00b6 The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Fields \u00b6 Field Name Field Type Description podAffinityTerm PodAffinityTerm Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. PodAffinityTerm \u00b6 Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Fields \u00b6 Field Name Field Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means \"this pod's namespace\". An empty selector ({}) matches all namespaces. This field is beta-level and is only honored when PodAffinityNamespaceSelector feature is enabled. namespaces Array< string > namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means \"this pod's namespace\" topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. TypedLocalObjectReference \u00b6 TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace.
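To show how the scheduling types above (PreferredSchedulingTerm, NodeSelectorTerm and its NodeSelectorRequirement entries) compose, here is a minimal sketch of a preferred node-affinity rule as it would sit in a pod spec; the label key disktype and value ssd are illustrative assumptions.

```yaml
# Minimal sketch (assumed label): a single PreferredSchedulingTerm.
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50               # PreferredSchedulingTerm.weight, 1-100
        preference:              # NodeSelectorTerm
          matchExpressions:
            - key: disktype      # NodeSelectorRequirement.key (hypothetical label)
              operator: In
              values:
                - ssd
```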
Fields \u00b6 Field Name Field Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced PersistentVolumeClaimCondition \u00b6 PersistentVolumeClaimCondition contains details about the state of a PVC. Fields \u00b6 Field Name Field Type Description lastProbeTime Time Last time we probed the condition. lastTransitionTime Time Last time the condition transitioned from one status to another. message string Human-readable message indicating details about last transition. reason string Unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports \"ResizeStarted\" that means the underlying persistent volume is being resized. status string No description available type string Possible enum values: - \"FileSystemResizePending\" - controller resize is finished and a file system resize is pending on node - \"Resizing\" - a user-triggered resize of the PVC has been started KeyToPath \u00b6 Maps a string key to a path within a volume. Fields \u00b6 Field Name Field Type Description key string The key to project. mode integer Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string The relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. DownwardAPIVolumeFile \u00b6 DownwardAPIVolumeFile represents information to create the file containing the pod field Fields \u00b6 Field Name Field Type Description fieldRef ObjectFieldSelector Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef ResourceFieldSelector Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. PersistentVolumeClaimTemplate \u00b6 PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. Fields \u00b6 Field Name Field Type Description metadata ObjectMeta May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec PersistentVolumeClaimSpec The specification for the PersistentVolumeClaim.
The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. VolumeProjection \u00b6 Projection that may be projected along with other supported volume types Fields \u00b6 Field Name Field Type Description configMap ConfigMapProjection information about the configMap data to project downwardAPI DownwardAPIProjection information about the downwardAPI data to project secret SecretProjection information about the secret data to project serviceAccountToken ServiceAccountTokenProjection information about the serviceAccountToken data to project ObjectFieldSelector \u00b6 ObjectFieldSelector selects an APIVersioned field of an object. Fields \u00b6 Field Name Field Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to \"v1\". fieldPath string Path of the field to select in the specified API version. ResourceFieldSelector \u00b6 ResourceFieldSelector represents container resources (cpu, memory) and their output format Fields \u00b6 Field Name Field Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to \"1\" resource string Required: resource to select HTTPHeader \u00b6 HTTPHeader describes a custom header to be used in HTTP probes Fields \u00b6 Field Name Field Type Description name string The header field name value string The header field value NodeSelectorTerm \u00b6 A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Fields \u00b6 Field Name Field Type Description matchExpressions Array< NodeSelectorRequirement > A list of node selector requirements by node's labels. matchFields Array< NodeSelectorRequirement > A list of node selector requirements by node's fields. ConfigMapProjection \u00b6 Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) Fields \u00b6 Field Name Field Type Description items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its keys must be defined DownwardAPIProjection \u00b6 Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode. 
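To illustrate how the projection types in this section combine, the sketch below declares one projected volume with configMap, downwardAPI, and serviceAccountToken sources (see the Fields entries that follow); the ConfigMap name app-config and the audience value are illustrative assumptions.

```yaml
# Minimal sketch (assumed names): one projected volume, several VolumeProjection sources.
volumes:
  - name: combined-projection
    projected:
      defaultMode: 0444              # octal mode for the created files
      sources:
        - configMap:                 # ConfigMapProjection
            name: app-config         # hypothetical ConfigMap
        - downwardAPI:               # DownwardAPIProjection
            items:
              - path: labels
                fieldRef:
                  fieldPath: metadata.labels
        - serviceAccountToken:       # ServiceAccountTokenProjection
            audience: example-api    # hypothetical audience
            expirationSeconds: 3600
            path: token
```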
Fields \u00b6 Field Name Field Type Description items Array< DownwardAPIVolumeFile > Items is a list of DownwardAPIVolume files SecretProjection \u00b6 Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) Fields \u00b6 Field Name Field Type Description items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined ServiceAccountTokenProjection \u00b6 ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pod's runtime filesystem for use against APIs (Kubernetes API Server or otherwise). Fields \u00b6 Field Name Field Type Description audience string Audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer ExpirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string Path is the path relative to the mount point of the file to project the token into. NodeSelectorRequirement \u00b6 A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Fields \u00b6 Field Name Field Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - \"DoesNotExist\" - \"Exists\" - \"Gt\" - \"In\" - \"Lt\" - \"NotIn\" values Array< string > An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer.
This array is replaced during a strategic merge patch.","title":"Field Reference"},{"location":"fields/#field-reference","text":"","title":"Field Reference"},{"location":"fields/#workflow","text":"Workflow is the definition of a workflow resource Examples (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - 
[`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`daemoned-stateful-set-with-service.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemoned-stateful-set-with-service.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - 
[`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - 
[`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-jobs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-jobs.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-orchestration.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-orchestration.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch-basic.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch-basic.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-resource-log-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-resource-log-selector.yaml) - [`k8s-set-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-set-owner-reference.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - 
[`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resource-delete-with-flags.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resource-delete-with-flags.yaml) - [`resource-flags.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resource-flags.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - 
[`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - 
[`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml)","title":"Workflow"},{"location":"fields/#fields","text":"Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta No description available spec WorkflowSpec No description available status WorkflowStatus No description available","title":"Fields"},{"location":"fields/#cronworkflow","text":"CronWorkflow is the definition of a scheduled workflow resource Examples (click to open) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml)","title":"CronWorkflow"},{"location":"fields/#fields_1","text":"Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta No description available spec CronWorkflowSpec No description available status CronWorkflowStatus No description available","title":"Fields"},{"location":"fields/#workflowtemplate","text":"WorkflowTemplate is the definition of a workflow template resource Examples (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"WorkflowTemplate"},{"location":"fields/#fields_2","text":"Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta No description available spec WorkflowSpec No description available","title":"Fields"},{"location":"fields/#workflowspec","text":"WorkflowSpec is the specification of a Workflow. 
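For context, a minimal Workflow manifest showing where the WorkflowSpec fields documented below sit; the generated name, template name, image, and message are illustrative assumptions in the style of the linked hello-world example.

```yaml
# Minimal sketch: a Workflow whose spec uses a single container template.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-     # hypothetical name prefix
spec:
  entrypoint: main               # WorkflowSpec.entrypoint
  templates:
    - name: main
      container:
        image: busybox:1.36      # hypothetical image
        command: [echo]
        args: [hello from a minimal WorkflowSpec]
```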
Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - 
[`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - 
[`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - 
[`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - 
[`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - 
[`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml)","title":"WorkflowSpec"},{"location":"fields/#fields_3","text":"Field Name Field Type Description activeDeadlineSeconds integer Optional duration in seconds relative to the workflow start time which the workflow is allowed to run before the controller terminates the io.argoproj.workflow.v1alpha1. A value of zero is used to terminate a Running workflow affinity Affinity Affinity sets the scheduling constraints for all pods in the io.argoproj.workflow.v1alpha1. Can be overridden by an affinity specified in the template archiveLogs boolean ArchiveLogs indicates if the container logs should be archived arguments Arguments Arguments contain the parameters and artifacts sent to the workflow entrypoint Parameters are referenceable globally using the 'workflow' variable prefix. e.g. {{io.argoproj.workflow.v1alpha1.parameters.myparam}} artifactGC WorkflowLevelArtifactGC ArtifactGC describes the strategy to use when deleting artifacts from completed or deleted workflows (applies to all output Artifacts unless Artifact.ArtifactGC is specified, which overrides this) artifactRepositoryRef ArtifactRepositoryRef ArtifactRepositoryRef specifies the configMap name and key containing the artifact repository config. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods. ServiceAccountName of ExecutorConfig must be specified if this value is false. dnsConfig PodDNSConfig PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to \"ClusterFirst\". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. entrypoint string Entrypoint is a template reference to the starting point of the io.argoproj.workflow.v1alpha1. executor ExecutorConfig Executor holds configurations of executor containers of the io.argoproj.workflow.v1alpha1. hooks LifecycleHook Hooks holds the lifecycle hook which is invoked at lifecycle of step, irrespective of the success, failure, or error status of the primary step hostAliases Array< HostAlias > No description available hostNetwork boolean Host networking requested for this workflow pod. Defaults to false. imagePullSecrets Array< LocalObjectReference > ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. 
More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod metrics Metrics Metrics are a list of metrics emitted from this Workflow nodeSelector Map< string , string > NodeSelector is a selector which will result in all pods of the workflow being scheduled on the selected node(s). This is able to be overridden by a nodeSelector specified in the template. onExit string OnExit is a template reference which is invoked at the end of the workflow, irrespective of the success, failure, or error of the primary io.argoproj.workflow.v1alpha1. parallelism integer Parallelism limits the max total parallel pods that can execute at the same time in a workflow podDisruptionBudget PodDisruptionBudgetSpec PodDisruptionBudget holds the number of concurrent disruptions that you allow for Workflow's Pods. Controller will automatically add the selector with workflow name, if selector is empty. Optional: Defaults to empty. podGC PodGC PodGC describes the strategy to use when deleting completed pods podMetadata Metadata PodMetadata defines additional metadata that should be applied to workflow pods ~~ podPriority ~~ ~~ integer ~~ ~~Priority to apply to workflow pods.~~ DEPRECATED: Use PodPriorityClassName instead. podPriorityClassName string PriorityClassName to apply to workflow pods. podSpecPatch string PodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of container fields which are not strings (e.g. resource limits). priority integer Priority is used if the controller is configured to process a limited number of workflows in parallel. Workflows with higher priority are processed first. retryStrategy RetryStrategy RetryStrategy for all templates in the io.argoproj.workflow.v1alpha1. schedulerName string Set scheduler name for all pods. Will be overridden if container/script template's scheduler name is set. Default scheduler will be used if neither specified. securityContext PodSecurityContext SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to run all pods of the workflow as. shutdown string Shutdown will shut down the workflow according to its ShutdownStrategy suspend boolean Suspend will suspend the workflow and prevent execution of any future steps in the workflow synchronization Synchronization Synchronization holds synchronization lock configuration for this Workflow templateDefaults Template TemplateDefaults holds default template values that will apply to all templates in the Workflow, unless overridden on the template-level templates Array< Template > Templates is a list of workflow templates used in a workflow tolerations Array< Toleration > Tolerations to apply to workflow pods. ttlStrategy TTLStrategy TTLStrategy limits the lifetime of a Workflow that has finished execution depending on if it Succeeded or Failed. If this struct is set, once the Workflow finishes, it will be deleted after the time to live expires. If this field is unset, the controller config map will hold the default values. volumeClaimGC VolumeClaimGC VolumeClaimGC describes the strategy to use when deleting volumes from completed workflows volumeClaimTemplates Array< PersistentVolumeClaim > VolumeClaimTemplates is a list of claims that containers are allowed to reference. 
The Workflow controller will create the claims at the beginning of the workflow and delete the claims upon completion of the workflow volumes Array< Volume > Volumes is a list of volumes that can be mounted by containers in a io.argoproj.workflow.v1alpha1. workflowMetadata WorkflowMetadata WorkflowMetadata contains some metadata of the workflow to refer to workflowTemplateRef WorkflowTemplateRef WorkflowTemplateRef holds a reference to a WorkflowTemplate for execution","title":"Fields"},{"location":"fields/#workflowstatus","text":"WorkflowStatus contains overall status information about a workflow","title":"WorkflowStatus"},{"location":"fields/#fields_4","text":"Field Name Field Type Description artifactGCStatus ArtGCStatus ArtifactGCStatus maintains the status of Artifact Garbage Collection artifactRepositoryRef ArtifactRepositoryRefStatus ArtifactRepositoryRef is used to cache the repository to use so we do not need to determine it every time we reconcile. compressedNodes string Compressed and base64 decoded Nodes map conditions Array< Condition > Conditions is a list of conditions the Workflow may have estimatedDuration integer EstimatedDuration in seconds. finishedAt Time Time at which this workflow completed message string A human readable message indicating details about why the workflow is in this condition. nodes NodeStatus Nodes is a mapping between a node ID and the node's status. offloadNodeStatusVersion string Whether or not node status has been offloaded to a database. If exists, then Nodes and CompressedNodes will be empty. This will actually be populated with a hash of the offloaded data. outputs Outputs Outputs captures output values and artifact locations produced by the workflow via global outputs persistentVolumeClaims Array< Volume > PersistentVolumeClaims tracks all PVCs that were created as part of the io.argoproj.workflow.v1alpha1. The contents of this list are drained at the end of the workflow. phase string Phase is a simple, high-level summary of where the workflow is in its lifecycle. Will be \"\" (Unknown), \"Pending\", or \"Running\" before the workflow is completed, and \"Succeeded\", \"Failed\" or \"Error\" once the workflow has completed. progress string Progress to completion resourcesDuration Map< integer , int64 > ResourcesDuration is the total for the workflow startedAt Time Time at which this workflow started storedTemplates Template StoredTemplates is a mapping between a template ref and the node's status. storedWorkflowTemplateSpec WorkflowSpec StoredWorkflowSpec stores the WorkflowTemplate spec for future execution. synchronization SynchronizationStatus Synchronization stores the status of synchronization locks taskResultsCompleted Map< boolean , string > Have task results been completed? 
(mapped by Pod name) used to prevent premature garbage collection of artifacts.","title":"Fields"},{"location":"fields/#cronworkflowspec","text":"CronWorkflowSpec is the specification of a CronWorkflow Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - 
[`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - 
[`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - 
[`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - 
[`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - 
[`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - 
[`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml)","title":"CronWorkflowSpec"},{"location":"fields/#fields_5","text":"Field Name Field Type Description concurrencyPolicy string ConcurrencyPolicy is the K8s-style concurrency policy that will be used failedJobsHistoryLimit integer FailedJobsHistoryLimit is the number of failed jobs to be kept at a time schedule string Schedule is a schedule to run the Workflow in Cron format startingDeadlineSeconds integer StartingDeadlineSeconds is the K8s-style deadline that will limit the time a CronWorkflow will be run after its original scheduled time if it is missed. successfulJobsHistoryLimit integer SuccessfulJobsHistoryLimit is the number of successful jobs to be kept at a time suspend boolean Suspend is a flag that will stop new CronWorkflows from running if set to true timezone string Timezone is the timezone against which the cron schedule will be calculated, e.g. \"Asia/Tokyo\". Default is machine's local time. workflowMetadata ObjectMeta WorkflowMetadata contains some metadata of the workflow to be run workflowSpec WorkflowSpec WorkflowSpec is the spec of the workflow to be run","title":"Fields"},{"location":"fields/#cronworkflowstatus","text":"CronWorkflowStatus is the status of a CronWorkflow","title":"CronWorkflowStatus"},{"location":"fields/#fields_6","text":"Field Name Field Type Description active Array< ObjectReference > Active is a list of active workflows stemming from this CronWorkflow conditions Array< Condition > Conditions is a list of conditions the CronWorkflow may have lastScheduledTime Time LastScheduleTime is the last time the CronWorkflow was scheduled","title":"Fields"},{"location":"fields/#arguments","text":"Arguments to a template Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - 
[`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - 
[`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - 
[`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml)","title":"Arguments"},{"location":"fields/#fields_7","text":"Field Name Field Type Description artifacts Array< Artifact > Artifacts is the list of artifacts to pass to the template or workflow parameters Array< Parameter > Parameters is the list of parameters to pass to the template or workflow","title":"Fields"},{"location":"fields/#workflowlevelartifactgc","text":"WorkflowLevelArtifactGC describes how to delete artifacts from completed Workflows - this spec is used on the Workflow level Examples with this field (click to open) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml)","title":"WorkflowLevelArtifactGC"},{"location":"fields/#fields_8","text":"Field Name Field 
Type Description forceFinalizerRemoval boolean ForceFinalizerRemoval: if set to true, the finalizer will be removed in the case that Artifact GC fails podMetadata Metadata PodMetadata is an optional field for specifying the Labels and Annotations that should be assigned to the Pod doing the deletion podSpecPatch string PodSpecPatch holds strategic merge patch to apply against the artgc pod spec. serviceAccountName string ServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion strategy string Strategy is the strategy to use.","title":"Fields"},{"location":"fields/#artifactrepositoryref","text":"No description available Examples with this field (click to open) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml)","title":"ArtifactRepositoryRef"},{"location":"fields/#fields_9","text":"Field Name Field Type Description configMap string The name of the config map. Defaults to \"artifact-repositories\". key string The config map key. Defaults to the value of the \"workflows.argoproj.io/default-artifact-repository\" annotation.","title":"Fields"},{"location":"fields/#executorconfig","text":"ExecutorConfig holds configurations of an executor container.","title":"ExecutorConfig"},{"location":"fields/#fields_10","text":"Field Name Field Type Description serviceAccountName string ServiceAccountName specifies the service account name of the executor container.","title":"Fields"},{"location":"fields/#lifecyclehook","text":"No description available Examples with this field (click to open) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml)","title":"LifecycleHook"},{"location":"fields/#fields_11","text":"Field Name Field Type Description arguments Arguments Arguments hold arguments to the template expression string Expression is a condition expression for when a node will be retried. 
If it evaluates to false, the node will not be retried and the retry strategy will be ignored template string Template is the name of the template to execute by the hook templateRef TemplateRef TemplateRef is the reference to the template resource to execute by the hook","title":"Fields"},{"location":"fields/#metrics","text":"Metrics are a list of metrics emitted from a Workflow/Template Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml)","title":"Metrics"},{"location":"fields/#fields_12","text":"Field Name Field Type Description prometheus Array< Prometheus > Prometheus is a list of prometheus metrics to be emitted","title":"Fields"},{"location":"fields/#podgc","text":"PodGC describes how to delete completed pods as they complete Examples with this field (click to open) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml)","title":"PodGC"},{"location":"fields/#fields_13","text":"Field Name Field Type Description deleteDelayDuration Duration DeleteDelayDuration specifies the duration before pods in the GC queue get deleted. labelSelector LabelSelector LabelSelector is the label selector to check if the pods match the labels before being added to the pod GC queue. strategy string Strategy is the strategy to use. One of \"OnPodCompletion\", \"OnPodSuccess\", \"OnWorkflowCompletion\", \"OnWorkflowSuccess\". If unset, does not delete Pods","title":"Fields"},{"location":"fields/#metadata","text":"Pod metdata Examples with this field (click to open) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml)","title":"Metadata"},{"location":"fields/#fields_14","text":"Field Name Field Type Description annotations Map< string , string > No description available labels Map< string , string > No description available","title":"Fields"},{"location":"fields/#retrystrategy","text":"RetryStrategy provides controls on how to retry a workflow step Examples with this field (click to open) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - 
[`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"RetryStrategy"},{"location":"fields/#fields_15","text":"Field Name Field Type Description affinity RetryAffinity Affinity prevents running workflow's step on the same host backoff Backoff Backoff is a backoff strategy expression string Expression is a condition expression for when a node will be retried. If it evaluates to false, the node will not be retried and the retry strategy will be ignored limit IntOrString Limit is the maximum number of retry attempts when retrying a container. It does not include the original container; the maximum number of total attempts will be limit + 1 . retryPolicy string RetryPolicy is a policy of NodePhase statuses that will be retried","title":"Fields"},{"location":"fields/#synchronization","text":"Synchronization holds synchronization lock configuration Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml)","title":"Synchronization"},{"location":"fields/#fields_16","text":"Field Name Field Type Description mutex Mutex Mutex holds the Mutex lock details semaphore SemaphoreRef Semaphore holds the Semaphore configuration","title":"Fields"},{"location":"fields/#template","text":"Template is a reusable and composable unit of execution in a workflow Examples with this field (click to open) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml)","title":"Template"},{"location":"fields/#fields_17","text":"Field Name Field Type Description activeDeadlineSeconds IntOrString Optional duration in seconds relative to the StartTime that the pod may be active on a node before the system actively tries to terminate the pod; value must be positive integer This field is only applicable to container and script templates. affinity Affinity Affinity sets the pod's scheduling constraints Overrides the affinity set at the workflow level (if any) archiveLocation ArtifactLocation Location in which all files related to the step will be stored (logs, artifacts, etc...). Can be overridden by individual items in Outputs. If omitted, will use the default artifact repository location configured in the controller, appended with the / in the key. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods. ServiceAccountName of ExecutorConfig must be specified if this value is false. container Container Container is the main container image to run in the pod containerSet ContainerSetTemplate ContainerSet groups multiple containers within a single pod. daemon boolean Daemon will allow a workflow to proceed to the next step so long as the container reaches readiness dag DAGTemplate DAG template subtype which runs a DAG data Data Data is a data template executor ExecutorConfig Executor holds configurations of the executor container. failFast boolean FailFast, if specified, will fail this template if any of its child pods has failed. This is useful for when this template is expanded with withItems , etc. 
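To make the `RetryStrategy` and `Backoff` fields above concrete, here is a minimal, illustrative sketch (the image, command and numeric limits are placeholders rather than values taken from a shipped example):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: retry-backoff-
spec:
  entrypoint: flaky
  templates:
    - name: flaky
      retryStrategy:
        limit: "3"              # up to 3 retries, i.e. 4 total attempts
        retryPolicy: Always     # retry on both Failed and Error node phases
        backoff:
          duration: "10s"       # base delay before the first retry
          factor: "2"           # multiply the delay after each failed attempt
          maxDuration: "5m"     # stop retrying once this much time has elapsed
      container:
        image: python:alpine3.6
        command: [python, -c]
        args: ["import random, sys; sys.exit(random.choice([0, 1]))"]
```

With `factor: "2"` the delays grow 10s, 20s, 40s, ... until `maxDuration` or `limit` is reached.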
hostAliases Array< HostAlias > HostAliases is an optional list of hosts and IPs that will be injected into the pod spec http HTTP HTTP makes a HTTP request initContainers Array< UserContainer > InitContainers is a list of containers which run before the main container. inputs Inputs Inputs describe what inputs parameters and artifacts are supplied to this template memoize Memoize Memoize allows templates to use outputs generated from already executed templates metadata Metadata Metdata sets the pods's metadata, i.e. annotations and labels metrics Metrics Metrics are a list of metrics emitted from this template name string Name is the name of the template nodeSelector Map< string , string > NodeSelector is a selector to schedule this step of the workflow to be run on the selected node(s). Overrides the selector set at the workflow level. outputs Outputs Outputs describe the parameters and artifacts that this template produces parallelism integer Parallelism limits the max total parallel pods that can execute at the same time within the boundaries of this template invocation. If additional steps/dag templates are invoked, the pods created by those templates will not be counted towards this total. plugin Plugin Plugin is a plugin template podSpecPatch string PodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of container fields which are not strings (e.g. resource limits). priority integer Priority to apply to workflow pods. priorityClassName string PriorityClassName to apply to workflow pods. resource ResourceTemplate Resource template subtype which can run k8s resources retryStrategy RetryStrategy RetryStrategy describes how to retry a template when it fails schedulerName string If specified, the pod will be dispatched by specified scheduler. Or it will be dispatched by workflow scope scheduler if specified. If neither specified, the pod will be dispatched by default scheduler. script ScriptTemplate Script runs a portion of code against an interpreter securityContext PodSecurityContext SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. serviceAccountName string ServiceAccountName to apply to workflow pods sidecars Array< UserContainer > Sidecars is a list of containers which run alongside the main container Sidecars are automatically killed when the main container completes steps Array> Steps define a series of sequential/parallel workflow steps suspend SuspendTemplate Suspend template subtype which can suspend a workflow when reaching the step synchronization Synchronization Synchronization holds synchronization lock configuration for this template timeout string Timeout allows to set the total node execution timeout duration counting from the node's start time. This duration also includes time in which the node spends in Pending state. This duration may not be applied to Step or DAG templates. tolerations Array< Toleration > Tolerations to apply to workflow pods. 
volumes Array< Volume > Volumes is a list of volumes that can be mounted by containers in a template.","title":"Fields"},{"location":"fields/#ttlstrategy","text":"TTLStrategy is the strategy for the time to live depending on if the workflow succeeded or failed Examples with this field (click to open) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml)","title":"TTLStrategy"},{"location":"fields/#fields_18","text":"Field Name Field Type Description secondsAfterCompletion integer SecondsAfterCompletion is the number of seconds to live after completion secondsAfterFailure integer SecondsAfterFailure is the number of seconds to live after failure secondsAfterSuccess integer SecondsAfterSuccess is the number of seconds to live after success","title":"Fields"},{"location":"fields/#volumeclaimgc","text":"VolumeClaimGC describes how to delete volumes from completed Workflows","title":"VolumeClaimGC"},{"location":"fields/#fields_19","text":"Field Name Field Type Description strategy string Strategy is the strategy to use. One of \"OnWorkflowCompletion\", \"OnWorkflowSuccess\". Defaults to \"OnWorkflowSuccess\"","title":"Fields"},{"location":"fields/#workflowmetadata","text":"No description available Examples with this field (click to open) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml)","title":"WorkflowMetadata"},{"location":"fields/#fields_20","text":"Field Name Field Type Description annotations Map< string , string > No description available labels Map< string , string > No description available labelsFrom LabelValueFrom No description available","title":"Fields"},{"location":"fields/#workflowtemplateref","text":"WorkflowTemplateRef is a reference to a WorkflowTemplate resource. Examples with this field (click to open) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml)","title":"WorkflowTemplateRef"},{"location":"fields/#fields_21","text":"Field Name Field Type Description clusterScope boolean ClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate). 
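As an illustrative sketch of the garbage-collection fields described above (`TTLStrategy` for the Workflow object, `PodGC` for its pods), assuming a trivial busybox step:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: gc-ttl-
spec:
  entrypoint: main
  ttlStrategy:
    secondsAfterSuccess: 300       # successful Workflows are deleted after 5 minutes
    secondsAfterFailure: 86400     # failed Workflows are kept for a day for debugging
    secondsAfterCompletion: 86400  # upper bound regardless of outcome
  podGC:
    strategy: OnPodCompletion      # delete pods as soon as they complete
  templates:
    - name: main
      container:
        image: busybox
        command: [echo, hello]
```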
name string Name is the resource name of the workflow template.","title":"Fields"},{"location":"fields/#artgcstatus","text":"ArtGCStatus maintains state related to ArtifactGC","title":"ArtGCStatus"},{"location":"fields/#fields_22","text":"Field Name Field Type Description notSpecified boolean if this is true, we already checked to see if we need to do it and we don't podsRecouped Map< boolean , string > have completed Pods been processed? (mapped by Pod name) used to prevent re-processing the Status of a Pod more than once strategiesProcessed Map< boolean , string > have Pods been started to perform this strategy? (enables us not to re-process what we've already done)","title":"Fields"},{"location":"fields/#artifactrepositoryrefstatus","text":"No description available Examples with this field (click to open) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml)","title":"ArtifactRepositoryRefStatus"},{"location":"fields/#fields_23","text":"Field Name Field Type Description artifactRepository ArtifactRepository The repository the workflow will use. This maybe empty before v3.1. configMap string The name of the config map. Defaults to \"artifact-repositories\". default boolean If this ref represents the default artifact repository, rather than a config map. key string The config map key. Defaults to the value of the \"workflows.argoproj.io/default-artifact-repository\" annotation. namespace string The namespace of the config map. Defaults to the workflow's namespace, or the controller's namespace (if found).","title":"Fields"},{"location":"fields/#condition","text":"No description available","title":"Condition"},{"location":"fields/#fields_24","text":"Field Name Field Type Description message string Message is the condition message status string Status is the status of the condition type string Type is the type of condition","title":"Fields"},{"location":"fields/#nodestatus","text":"NodeStatus contains status information about an individual node in the workflow","title":"NodeStatus"},{"location":"fields/#fields_25","text":"Field Name Field Type Description boundaryID string BoundaryID indicates the node ID of the associated template root node in which this node belongs to children Array< string > Children is a list of child node IDs daemoned boolean Daemoned tracks whether or not this node was daemoned and need to be terminated displayName string DisplayName is a human readable representation of the node. Unique within a template boundary estimatedDuration integer EstimatedDuration in seconds. finishedAt Time Time at which this node completed hostNodeName string HostNodeName name of the Kubernetes node on which the Pod is running, if applicable id string ID is a unique identifier of a node within the worklow It is implemented as a hash of the node name, which makes the ID deterministic inputs Inputs Inputs captures input parameter values and artifact locations supplied to this template invocation memoizationStatus MemoizationStatus MemoizationStatus holds information about cached nodes message string A human readable message indicating details about why the node is in this condition. name string Name is unique name in the node tree used to generate the node ID nodeFlag NodeFlag NodeFlag tracks some history of node. e.g.) hooked, retried, etc. outboundNodes Array< string > OutboundNodes tracks the node IDs which are considered \"outbound\" nodes to a template invocation. 
For every invocation of a template, there are nodes which we considered as \"outbound\". Essentially, these are last nodes in the execution sequence to run, before the template is considered completed. These nodes are then connected as parents to a following step. In the case of single pod steps (i.e. container, script, resource templates), this list will be nil since the pod itself is already considered the \"outbound\" node. In the case of DAGs, outbound nodes are the \"target\" tasks (tasks with no children). In the case of steps, outbound nodes are all the containers involved in the last step group. NOTE: since templates are composable, the list of outbound nodes are carried upwards when a DAG/steps template invokes another DAG/steps template. In other words, the outbound nodes of a template, will be a superset of the outbound nodes of its last children. outputs Outputs Outputs captures output parameter values and artifact locations produced by this template invocation phase string Phase a simple, high-level summary of where the node is in its lifecycle. Can be used as a state machine. Will be one of these values \"Pending\", \"Running\" before the node is completed, or \"Succeeded\", \"Skipped\", \"Failed\", \"Error\", or \"Omitted\" as a final state. podIP string PodIP captures the IP of the pod for daemoned steps progress string Progress to completion resourcesDuration Map< integer , int64 > ResourcesDuration is indicative, but not accurate, resource duration. This is populated when the nodes completes. startedAt Time Time at which this node started synchronizationStatus NodeSynchronizationStatus SynchronizationStatus is the synchronization status of the node templateName string TemplateName is the template name which this node corresponds to. Not applicable to virtual nodes (e.g. Retry, StepGroup) templateRef TemplateRef TemplateRef is the reference to the template resource which this node corresponds to. Not applicable to virtual nodes (e.g. Retry, StepGroup) templateScope string TemplateScope is the template scope in which the template of this node was retrieved. 
type string Type indicates type of node","title":"Fields"},{"location":"fields/#outputs","text":"Outputs hold parameters, artifacts, and results from a step Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - 
[`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"Outputs"},{"location":"fields/#fields_26","text":"Field Name Field Type Description artifacts Array< Artifact > Artifacts holds the list of output artifacts produced by a step exitCode string ExitCode holds the exit code of a script template parameters Array< Parameter > Parameters holds the list of output parameters produced by a step result string Result holds the result (stdout) of a script template","title":"Fields"},{"location":"fields/#synchronizationstatus","text":"SynchronizationStatus stores the status of semaphore and mutex. 
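A minimal sketch of the `Outputs` fields above, assuming a busybox container that writes `/tmp/message.txt` (the template and file names are placeholders):

```yaml
templates:
  - name: produce
    container:
      image: busybox
      command: [sh, -c]
      args: ["echo -n hello > /tmp/message.txt"]
    outputs:
      parameters:
        - name: message             # referenced as steps.produce.outputs.parameters.message
          valueFrom:
            path: /tmp/message.txt  # the parameter value is read from this file
      artifacts:
        - name: message-file        # saved to the configured artifact repository
          path: /tmp/message.txt
```

For script templates, the captured stdout is additionally exposed as `outputs.result`.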
Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml)","title":"SynchronizationStatus"},{"location":"fields/#fields_27","text":"Field Name Field Type Description mutex MutexStatus Mutex stores this workflow's mutex holder details semaphore SemaphoreStatus Semaphore stores this workflow's Semaphore holder details","title":"Fields"},{"location":"fields/#artifact","text":"Artifact indicates an artifact to place at a specified path Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - 
[`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"Artifact"},{"location":"fields/#fields_28","text":"Field Name Field Type Description archive ArchiveStrategy Archive controls how the artifact will be saved to the artifact repository. archiveLogs boolean ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC describes the strategy to use when to deleting an artifact from completed or deleted workflows artifactory ArtifactoryArtifact Artifactory contains artifactory artifact location details azure AzureArtifact Azure contains Azure Storage artifact location details deleted boolean Has this been deleted? from string From allows an artifact to reference an artifact from a previous step fromExpression string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCS contains GCS artifact location details git GitArtifact Git contains git artifact location details globalName string GlobalName exports an output artifact to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFS contains HDFS artifact location details http HTTPArtifact HTTP contains HTTP artifact location details mode integer mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string name of the artifact. must be unique within a template's inputs/outputs. 
optional boolean Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSS contains OSS artifact location details path string Path is the container path to the artifact raw RawArtifact Raw contains raw artifact location details recurseMode boolean If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3 contains S3 artifact location details subPath string SubPath allows an artifact to be sourced from a subpath within the specified source","title":"Fields"},{"location":"fields/#parameter","text":"Parameter indicate a passed string parameter to a service template with an optional default value Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - 
[`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - 
[`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - 
[`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml)","title":"Parameter"},{"location":"fields/#fields_29","text":"Field Name Field Type Description default string Default is the default value to use for an input parameter if a value was not supplied description string Description is the parameter description enum Array< string > Enum holds a list of string values to choose from, for the actual value of the parameter globalName string GlobalName exports an output parameter to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.parameters.XXXX}} and in workflow.status.outputs.parameters name string Name is the parameter name value string Value is the literal value to use for the parameter. If specified in the context of an input parameter, the value takes precedence over any passed values valueFrom ValueFrom ValueFrom is the source for the output parameter's value","title":"Fields"},{"location":"fields/#templateref","text":"TemplateRef is a reference of template resource. Examples with this field (click to open) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"TemplateRef"},{"location":"fields/#fields_30","text":"Field Name Field Type Description clusterScope boolean ClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate). name string Name is the resource name of the template. 
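To illustrate the `Parameter` fields above, a hedged sketch with placeholder names: `default` applies when the caller supplies nothing, `enum` constrains the accepted values, and the workflow-level `arguments` override the template default:

```yaml
spec:
  entrypoint: greet
  arguments:
    parameters:
      - name: message
        value: "hello from the caller"   # overrides the template-level default
  templates:
    - name: greet
      inputs:
        parameters:
          - name: message
            default: "hello world"       # used when no value is supplied
            enum:                        # optional list of allowed values
              - "hello world"
              - "hello from the caller"
      container:
        image: busybox
        command: [echo, "{{inputs.parameters.message}}"]
```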
template string Template is the name of referred template in the resource.","title":"Fields"},{"location":"fields/#prometheus","text":"Prometheus is a prometheus metric to be emitted Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml)","title":"Prometheus"},{"location":"fields/#fields_31","text":"Field Name Field Type Description counter Counter Counter is a counter metric gauge Gauge Gauge is a gauge metric help string Help is a string that describes the metric histogram Histogram Histogram is a histogram metric labels Array< MetricLabel > Labels is a list of metric labels name string Name is the name of the metric when string When is a conditional statement that decides when to emit the metric","title":"Fields"},{"location":"fields/#retryaffinity","text":"RetryAffinity prevents running steps on the same host.","title":"RetryAffinity"},{"location":"fields/#fields_32","text":"Field Name Field Type Description nodeAntiAffinity RetryNodeAntiAffinity No description available","title":"Fields"},{"location":"fields/#backoff","text":"Backoff is a backoff strategy to use within retryStrategy Examples with this field (click to open) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml)","title":"Backoff"},{"location":"fields/#fields_33","text":"Field Name Field Type Description duration string Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. \"2m\", \"1h\") factor IntOrString Factor is a factor to multiply the base duration after each failed retry maxDuration string MaxDuration is the maximum amount of time allowed for a workflow in the backoff strategy","title":"Fields"},{"location":"fields/#mutex","text":"Mutex holds Mutex configuration Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml)","title":"Mutex"},{"location":"fields/#fields_34","text":"Field Name Field Type Description name string name of the mutex namespace string Namespace is the namespace of the mutex, default: [namespace of workflow]","title":"Fields"},{"location":"fields/#semaphoreref","text":"SemaphoreRef is a reference of Semaphore","title":"SemaphoreRef"},{"location":"fields/#fields_35","text":"Field Name Field Type Description configMapKeyRef ConfigMapKeySelector ConfigMapKeyRef is configmap selector for Semaphore configuration namespace string Namespace is the namespace of the configmap, default: [namespace of workflow]","title":"Fields"},{"location":"fields/#artifactlocation","text":"ArtifactLocation describes a location for a single or multiple artifacts. It is used as single artifact in the context of inputs/outputs (e.g. outputs.artifacts.artname). It is also used to describe the location of multiple artifacts such as the archive location of a single workflow step, which the executor will use as a default location to store its files. 
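A minimal sketch of an `ArtifactLocation` used as a template-level `archiveLocation`; the bucket, endpoint and secret names below are placeholders, not a recommended configuration:

```yaml
templates:
  - name: main
    archiveLocation:
      archiveLogs: true                       # also archive the container logs
      s3:
        endpoint: s3.amazonaws.com
        bucket: my-artifact-bucket            # placeholder bucket
        key: "{{workflow.name}}/{{pod.name}}" # per-step prefix in the bucket
        accessKeySecret:
          name: my-s3-credentials             # placeholder Kubernetes Secret
          key: accessKey
        secretKeySecret:
          name: my-s3-credentials
          key: secretKey
    container:
      image: busybox
      command: [echo, hello]
```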
Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml)","title":"ArtifactLocation"},{"location":"fields/#fields_36","text":"Field Name Field Type Description archiveLogs boolean ArchiveLogs indicates if the container logs should be archived artifactory ArtifactoryArtifact Artifactory contains artifactory artifact location details azure AzureArtifact Azure contains Azure Storage artifact location details gcs GCSArtifact GCS contains GCS artifact location details git GitArtifact Git contains git artifact location details hdfs HDFSArtifact HDFS contains HDFS artifact location details http HTTPArtifact HTTP contains HTTP artifact location details oss OSSArtifact OSS contains OSS artifact location details raw RawArtifact Raw contains raw artifact location details s3 S3Artifact S3 contains S3 artifact location details","title":"Fields"},{"location":"fields/#containersettemplate","text":"No description available Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml)","title":"ContainerSetTemplate"},{"location":"fields/#fields_37","text":"Field Name Field Type Description containers Array< ContainerNode > No description available retryStrategy ContainerSetRetryStrategy RetryStrategy describes how to retry a container nodes in the container set if it fails. Nbr of retries(default 0) and sleep duration between retries(default 0s, instant retry) can be set. 
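The `ContainerSetTemplate` fields above can be sketched as follows (image names are placeholders; `dependencies` orders the containers inside the single pod, and a container named `main` is what allows `outputs.result` to be captured):

```yaml
templates:
  - name: graph
    volumes:
      - name: workspace
        emptyDir: {}
    containerSet:
      volumeMounts:
        - name: workspace
          mountPath: /workspace    # shared by every container in the set
      containers:
        - name: a
          image: argoproj/argosay:v2
        - name: b
          image: argoproj/argosay:v2
          dependencies: [a]        # b starts only after a completes
        - name: main
          image: argoproj/argosay:v2
          dependencies: [b]
```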
volumeMounts Array< VolumeMount > No description available","title":"Fields"},{"location":"fields/#dagtemplate","text":"DAGTemplate is a template subtype for directed acyclic graph templates Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - 
[`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"DAGTemplate"},{"location":"fields/#fields_38","text":"Field Name Field Type Description failFast boolean This flag is for DAG logic. The DAG logic has a built-in \"fail fast\" feature to stop scheduling new steps as soon as it detects that one of the DAG nodes has failed. It then waits until all DAG nodes are completed before failing the DAG itself. The FailFast flag defaults to true; if set to false, it allows a DAG to run all branches of the DAG to completion (either success or failure), regardless of the failed outcomes of branches in the DAG. More info and an example of this feature at https://github.com/argoproj/argo-workflows/issues/1442 target string Target is one or more names of targets to execute in a DAG tasks Array< DAGTask > Tasks are a list of DAG tasks","title":"Fields"},{"location":"fields/#data","text":"Data is a data template Examples with this field (click to open) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml)","title":"Data"},{"location":"fields/#fields_39","text":"Field Name Field Type Description source DataSource Source sources external data into a data template transformation Array< TransformationStep > Transformation applies a set of transformations","title":"Fields"},{"location":"fields/#http","text":"No description available Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - 
[`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml)","title":"HTTP"},{"location":"fields/#fields_40","text":"Field Name Field Type Description body string Body is content of the HTTP Request bodyFrom HTTPBodySource BodyFrom is content of the HTTP Request as Bytes headers Array< HTTPHeader > Headers are an optional list of headers to send with HTTP requests insecureSkipVerify boolean InsecureSkipVerify is a bool when if set to true will skip TLS verification for the HTTP client method string Method is HTTP methods for HTTP Request successCondition string SuccessCondition is an expression if evaluated to true is considered successful timeoutSeconds integer TimeoutSeconds is request timeout for HTTP Request. Default is 30 seconds url string URL of the HTTP Request","title":"Fields"},{"location":"fields/#usercontainer","text":"UserContainer is a container specified by a user. Examples with this field (click to open) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml)","title":"UserContainer"},{"location":"fields/#fields_41","text":"Field Name Field Type Description args Array< string > Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. 
When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes mirrorVolumeMounts boolean MirrorVolumeMounts will mount the same volumes specified in the main container to the container (including artifacts), at the same mountPaths. This enables dind daemon to partially see the same filesystem as the main container in order to use features such as docker volume binding name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. 
When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. 
Cannot be updated.","title":"Fields"},{"location":"fields/#inputs","text":"Inputs are the mechanism for passing parameters, artifacts, volumes from one template to another Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - 
[`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - 
[`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"Inputs"},{"location":"fields/#fields_42","text":"Field Name Field Type Description artifacts Array< Artifact > Artifacts are a list of artifacts passed as inputs parameters Array< Parameter > Parameters are a list of parameters passed as inputs","title":"Fields"},{"location":"fields/#memoize","text":"Memoization enables caching for the Outputs of the template Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml)","title":"Memoize"},{"location":"fields/#fields_43","text":"Field Name Field Type Description cache Cache Cache sets and configures the kind of cache key string Key is the key to use as the caching key maxAge string MaxAge is the maximum age (e.g. \"180s\", \"24h\") of an entry that is still considered valid. If an entry is older than the MaxAge, it will be ignored.","title":"Fields"},{"location":"fields/#plugin","text":"Plugin is an Object with exactly one key","title":"Plugin"},{"location":"fields/#resourcetemplate","text":"ResourceTemplate is a template subtype to manipulate kubernetes resources Examples with this field (click to open) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml)","title":"ResourceTemplate"},{"location":"fields/#fields_44","text":"Field Name Field Type Description action string Action is the action to perform on the resource. Must be one of: get, create, apply, delete, replace, patch failureCondition string FailureCondition is a label selector expression which describes the conditions of the k8s resource under which the step is considered failed flags Array< string > Flags is a set of additional options passed to kubectl before submitting a resource, e.g. to disable resource validation: flags: [ \"--validate=false\" # disable resource validation ] manifest string Manifest contains the kubernetes manifest manifestFrom ManifestFrom ManifestFrom is the source for a single kubernetes manifest mergeStrategy string MergeStrategy is the strategy used to merge a patch. It defaults to \"strategic\". Must be one of: strategic, merge, json setOwnerReference boolean SetOwnerReference sets the reference to the workflow on the OwnerReference of the generated resource. 
successCondition string SuccessCondition is a label selector expression which describes the conditions of the k8s resource in which it is acceptable to proceed to the following step","title":"Fields"},{"location":"fields/#scripttemplate","text":"ScriptTemplate is a template subtype to enable scripting through code steps Examples with this field (click to open) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - 
[`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"ScriptTemplate"},{"location":"fields/#fields_45","text":"Field Name Field Type Description args Array< string > Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. 
Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ source string Source contains the source code of the script to execute startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. 
workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.","title":"Fields"},{"location":"fields/#workflowstep","text":"WorkflowStep is a reference to a template to execute in a series of step Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - 
[`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - 
[`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"WorkflowStep"},{"location":"fields/#fields_46","text":"Field Name Field Type Description arguments Arguments Arguments hold arguments to the template continueOn 
ContinueOn ContinueOn makes argo proceed with the following step even if this step fails. Errors and Failed states can be specified hooks LifecycleHook Hooks holds the lifecycle hooks which are invoked during the lifecycle of the step, irrespective of the success, failure, or error status of the primary step inline Template Inline is the template. Template must be empty if this is declared (and vice-versa). name string Name of the step ~~ onExit ~~ ~~ string ~~ ~~OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template.~~ DEPRECATED: Use Hooks[exit].Template instead. template string Template is the name of the template to execute as the step templateRef TemplateRef TemplateRef is the reference to the template resource to execute as the step. when string When is an expression that determines whether the step should conditionally execute withItems Array< Item > WithItems expands a step into multiple parallel steps from the items in the list withParam string WithParam expands a step into multiple parallel steps from the value in the parameter, which is expected to be a JSON list. withSequence Sequence WithSequence expands a step into a numeric sequence","title":"Fields"},{"location":"fields/#suspendtemplate","text":"SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time Examples with this field (click to open) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml)","title":"SuspendTemplate"},{"location":"fields/#fields_47","text":"Field Name Field Type Description duration string Duration is the time to wait before automatically resuming a template. Must be a string. Default unit is seconds. 
Could also be a Duration, e.g.: \"2m\", \"6h\"","title":"Fields"},{"location":"fields/#labelvaluefrom","text":"No description available Examples with this field (click to open) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml)","title":"LabelValueFrom"},{"location":"fields/#fields_48","text":"Field Name Field Type Description expression string No description available","title":"Fields"},{"location":"fields/#artifactrepository","text":"ArtifactRepository represents an artifact repository in which a controller will store its artifacts","title":"ArtifactRepository"},{"location":"fields/#fields_49","text":"Field Name Field Type Description archiveLogs boolean ArchiveLogs enables log archiving artifactory ArtifactoryArtifactRepository Artifactory stores artifacts to JFrog Artifactory azure AzureArtifactRepository Azure stores artifact in an Azure Storage account gcs GCSArtifactRepository GCS stores artifact in a GCS object store hdfs HDFSArtifactRepository HDFS stores artifacts in HDFS oss OSSArtifactRepository OSS stores artifact in a OSS-compliant object store s3 S3ArtifactRepository S3 stores artifact in a S3-compliant object store","title":"Fields"},{"location":"fields/#memoizationstatus","text":"MemoizationStatus is the status of this memoized node","title":"MemoizationStatus"},{"location":"fields/#fields_50","text":"Field Name Field Type Description cacheName string Cache is the name of the cache that was used hit boolean Hit indicates whether this node was created from a cache entry key string Key is the name of the key used for this node's cache","title":"Fields"},{"location":"fields/#nodeflag","text":"No description available","title":"NodeFlag"},{"location":"fields/#fields_51","text":"Field Name Field Type Description hooked boolean Hooked tracks whether or not this node was triggered by hook or onExit retried boolean Retried tracks whether or not this node was retried by retryStrategy","title":"Fields"},{"location":"fields/#nodesynchronizationstatus","text":"NodeSynchronizationStatus stores the status of a node","title":"NodeSynchronizationStatus"},{"location":"fields/#fields_52","text":"Field Name Field Type Description waiting string Waiting is the name of the lock that this node is waiting for","title":"Fields"},{"location":"fields/#mutexstatus","text":"MutexStatus contains which objects hold mutex locks, and which objects this workflow is waiting on to release locks. Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml)","title":"MutexStatus"},{"location":"fields/#fields_53","text":"Field Name Field Type Description holding Array< MutexHolding > Holding is a list of mutexes and their respective objects that are held by mutex lock for this io.argoproj.workflow.v1alpha1. waiting Array< MutexHolding > Waiting is a list of mutexes and their respective objects this workflow is waiting for.","title":"Fields"},{"location":"fields/#semaphorestatus","text":"No description available","title":"SemaphoreStatus"},{"location":"fields/#fields_54","text":"Field Name Field Type Description holding Array< SemaphoreHolding > Holding stores the list of resource acquired synchronization lock for workflows. 
waiting Array< SemaphoreHolding > Waiting indicates the list of current synchronization lock holders.","title":"Fields"},{"location":"fields/#archivestrategy","text":"ArchiveStrategy describes how to archive files/directories when saving artifacts Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml)","title":"ArchiveStrategy"},{"location":"fields/#fields_55","text":"Field Name Field Type Description none NoneStrategy No description available tar TarStrategy No description available zip ZipStrategy No description available","title":"Fields"},{"location":"fields/#artifactgc","text":"ArtifactGC describes how to delete artifacts from completed Workflows - this is embedded into the WorkflowLevelArtifactGC, and also used for individual Artifacts to override that as needed Examples with this field (click to open) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml)","title":"ArtifactGC"},{"location":"fields/#fields_56","text":"Field Name Field Type Description podMetadata Metadata PodMetadata is an optional field for specifying the Labels and Annotations that should be assigned to the Pod doing the deletion serviceAccountName string ServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion strategy string Strategy is the strategy to use.","title":"Fields"},{"location":"fields/#artifactoryartifact","text":"ArtifactoryArtifact is the location of an artifactory artifact Examples with this field (click to open) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml)","title":"ArtifactoryArtifact"},{"location":"fields/#fields_57","text":"Field Name Field Type Description passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password url string URL of the artifact usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username","title":"Fields"},{"location":"fields/#azureartifact","text":"AzureArtifact is the location of an Azure Storage artifact Examples with this field (click to open) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml)","title":"AzureArtifact"},{"location":"fields/#fields_58","text":"Field Name Field Type Description accountKeySecret SecretKeySelector AccountKeySecret is the secret selector to the Azure Blob Storage account access key blob string Blob is the blob name (i.e., path) in the container where the artifact resides container string Container is the container where resources will be stored endpoint string Endpoint is 
the service url associated with an account. It is most likely \"https://<ACCOUNT_NAME>.blob.core.windows.net\" useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":"Fields"},{"location":"fields/#gcsartifact","text":"GCSArtifact is the location of a GCS artifact Examples with this field (click to open) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml)","title":"GCSArtifact"},{"location":"fields/#fields_59","text":"Field Name Field Type Description bucket string Bucket is the name of the bucket key string Key is the path in the bucket where the artifact resides serviceAccountKeySecret SecretKeySelector ServiceAccountKeySecret is the secret selector to the bucket's service account key","title":"Fields"},{"location":"fields/#gitartifact","text":"GitArtifact is the location of a git artifact Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml)","title":"GitArtifact"},{"location":"fields/#fields_60","text":"Field Name Field Type Description branch string Branch is the branch to fetch when SingleBranch is enabled depth integer Depth specifies that clones/fetches should be shallow and include the given number of commits from the branch tip disableSubmodules boolean DisableSubmodules disables submodules during git clone fetch Array< string > Fetch specifies a number of refs that should be fetched before checkout insecureIgnoreHostKey boolean InsecureIgnoreHostKey disables SSH strict host key checking during git clone passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password repo string Repo is the git repository revision string Revision is the git commit, tag, or branch to checkout singleBranch boolean SingleBranch enables single branch clone, using the branch parameter sshPrivateKeySecret SecretKeySelector SSHPrivateKeySecret is the secret selector to the repository ssh private key usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username","title":"Fields"},{"location":"fields/#hdfsartifact","text":"HDFSArtifact is the location of an HDFS artifact Examples with this field (click to open) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml)","title":"HDFSArtifact"},{"location":"fields/#fields_61","text":"Field Name Field Type Description addresses Array< string > Addresses are the accessible addresses of HDFS name nodes force boolean Force copies a file forcibly even if it exists hdfsUser string HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used. krbCCacheSecret SecretKeySelector KrbCCacheSecret is the secret selector for Kerberos ccache Either ccache or keytab can be set to use Kerberos. 
krbConfigConfigMap ConfigMapKeySelector KrbConfig is the configmap selector for Kerberos config as string It must be set if either ccache or keytab is used. krbKeytabSecret SecretKeySelector KrbKeytabSecret is the secret selector for Kerberos keytab Either ccache or keytab can be set to use Kerberos. krbRealm string KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used. krbServicePrincipalName string KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used. krbUsername string KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used. path string Path is a file path in HDFS","title":"Fields"},{"location":"fields/#httpartifact","text":"HTTPArtifact allows a file served on HTTP to be placed as an input artifact in a container Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml)","title":"HTTPArtifact"},{"location":"fields/#fields_62","text":"Field Name Field Type Description auth HTTPAuth Auth contains information for client authentication headers Array< Header > Headers are an optional list of headers to send with HTTP requests for artifacts url string URL of the artifact","title":"Fields"},{"location":"fields/#ossartifact","text":"OSSArtifact is the location of an Alibaba Cloud OSS artifact Examples with this field (click to open) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml)","title":"OSSArtifact"},{"location":"fields/#fields_63","text":"Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's 
access key bucket string Bucket is the name of the bucket createBucketIfNotPresent boolean CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist endpoint string Endpoint is the hostname of the bucket endpoint key string Key is the path in the bucket where the artifact resides lifecycleRule OSSLifecycleRule LifecycleRule specifies how to manage bucket's lifecycle secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key securityToken string SecurityToken is the user's temporary security token. For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":"Fields"},{"location":"fields/#rawartifact","text":"RawArtifact allows raw string content to be placed as an artifact in a container Examples with this field (click to open) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml)","title":"RawArtifact"},{"location":"fields/#fields_64","text":"Field Name Field Type Description data string Data is the string contents of the artifact","title":"Fields"},{"location":"fields/#s3artifact","text":"S3Artifact is the location of an S3 artifact","title":"S3Artifact"},{"location":"fields/#fields_65","text":"Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's access key bucket string Bucket is the name of the bucket caSecret SecretKeySelector CASecret specifies the secret that contains the CA, used to verify the TLS connection createBucketIfNotPresent CreateS3BucketOptions CreateBucketIfNotPresent tells the driver to attempt to create the S3 bucket for output artifacts, if it doesn't exist. Setting Enabled Encryption will apply either SSE-S3 to the bucket if KmsKeyId is not set or SSE-KMS if it is. encryptionOptions S3EncryptionOptions No description available endpoint string Endpoint is the hostname of the bucket endpoint insecure boolean Insecure will connect to the service with TLS key string Key is the key in the bucket where the artifact resides region string Region contains the optional bucket region roleARN string RoleARN is the Amazon Resource Name (ARN) of the role to assume. 
secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":"Fields"},{"location":"fields/#valuefrom","text":"ValueFrom describes a location in which to obtain the value to a parameter Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml)","title":"ValueFrom"},{"location":"fields/#fields_66","text":"Field Name Field Type Description configMapKeyRef ConfigMapKeySelector ConfigMapKeyRef is configmap selector for input parameter configuration default string Default specifies a value to be used if retrieving the value from the specified source fails 
event string Selector (https://github.com/antonmedv/expr) that is evaluated against the event to get the value of the parameter. E.g. payload.message expression string Expression, if defined, is evaluated to specify the value for the parameter jqFilter string JQFilter expression against the resource object in resource templates jsonPath string JSONPath of a resource to retrieve an output parameter value from in resource templates parameter string Parameter reference to a step or dag task in which to retrieve an output parameter value from (e.g. '{{steps.mystep.outputs.myparam}}') path string Path in the container to retrieve an output parameter value from in container templates supplied SuppliedValueFrom Supplied value to be filled in directly, either through the CLI, API, etc.","title":"Fields"},{"location":"fields/#counter","text":"Counter is a Counter prometheus metric Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml)","title":"Counter"},{"location":"fields/#fields_67","text":"Field Name Field Type Description value string Value is the value of the metric","title":"Fields"},{"location":"fields/#gauge","text":"Gauge is a Gauge prometheus metric Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml)","title":"Gauge"},{"location":"fields/#fields_68","text":"Field Name Field Type Description operation string Operation defines the operation to apply with value and the metrics' current value realtime boolean Realtime emits this metric in real time if applicable value string Value is the value to be used in the operation with the metric's current value. 
If no operation is set, value is the value of the metric","title":"Fields"},{"location":"fields/#histogram","text":"Histogram is a Histogram prometheus metric Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml)","title":"Histogram"},{"location":"fields/#fields_69","text":"Field Name Field Type Description buckets Array< Amount > Buckets is a list of bucket divisors for the histogram value string Value is the value of the metric","title":"Fields"},{"location":"fields/#metriclabel","text":"MetricLabel is a single label for a prometheus metric Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - 
[`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml)","title":"MetricLabel"},{"location":"fields/#fields_70","text":"Field Name Field Type Description key string No description available value string No description available","title":"Fields"},{"location":"fields/#retrynodeantiaffinity","text":"RetryNodeAntiAffinity is a placeholder for future expansion, only empty nodeAntiAffinity is allowed. In order to prevent running steps on the same host, it uses \"kubernetes.io/hostname\".","title":"RetryNodeAntiAffinity"},{"location":"fields/#containernode","text":"No description available Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml)","title":"ContainerNode"},{"location":"fields/#fields_71","text":"Field Name Field Type Description args Array< string > Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". 
Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell dependencies Array< string > No description available env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. 
Cannot be updated.","title":"Fields"},{"location":"fields/#containersetretrystrategy","text":"No description available Examples with this field (click to open) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"ContainerSetRetryStrategy"},{"location":"fields/#fields_72","text":"Field Name Field Type Description duration string Duration is the time between each retry, examples values are \"300ms\", \"1s\" or \"5m\". Valid time units are \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\". retries IntOrString Nbr of retries","title":"Fields"},{"location":"fields/#dagtask","text":"DAGTask represents a node in the graph during DAG execution Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - 
[`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"DAGTask"},{"location":"fields/#fields_73","text":"Field Name Field Type Description arguments Arguments Arguments are the parameter and artifact arguments to the template continueOn ContinueOn ContinueOn makes argo to proceed with the following step even if this step fails. Errors and Failed states can be specified dependencies Array< string > Dependencies are name of other targets which this depends on depends string Depends are name of other targets which this depends on hooks LifecycleHook Hooks hold the lifecycle hook which is invoked at lifecycle of task, irrespective of the success, failure, or error status of the primary task inline Template Inline is the template. Template must be empty if this is declared (and vice-versa). name string Name is the name of the target ~~ onExit ~~ ~~ string ~~ ~~OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template.~~ DEPRECATED: Use Hooks[exit].Template instead. template string Name of template to execute templateRef TemplateRef TemplateRef is the reference to the template resource to execute. 
when string When is an expression in which the task should conditionally execute withItems Array< Item > WithItems expands a task into multiple parallel tasks from the items in the list withParam string WithParam expands a task into multiple parallel tasks from the value in the parameter, which is expected to be a JSON list. withSequence Sequence WithSequence expands a task into a numeric sequence","title":"Fields"},{"location":"fields/#datasource","text":"DataSource sources external data into a data template Examples with this field (click to open) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - 
[`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"DataSource"},{"location":"fields/#fields_74","text":"Field Name Field Type Description artifactPaths ArtifactPaths ArtifactPaths is a data transformation that collects a list of artifact paths","title":"Fields"},{"location":"fields/#transformationstep","text":"No description available Examples with this field (click to open) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml)","title":"TransformationStep"},{"location":"fields/#fields_75","text":"Field Name Field Type Description expression string Expression defines an expr expression to apply","title":"Fields"},{"location":"fields/#httpbodysource","text":"HTTPBodySource contains the source of the HTTP body.","title":"HTTPBodySource"},{"location":"fields/#fields_76","text":"Field Name Field Type Description bytes byte No description available","title":"Fields"},{"location":"fields/#httpheader","text":"No description available Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml)","title":"HTTPHeader"},{"location":"fields/#fields_77","text":"Field Name Field Type Description name string No description available value string No description available valueFrom HTTPHeaderSource No description available","title":"Fields"},{"location":"fields/#cache","text":"Cache is the configuration for the type of cache to be used Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml)","title":"Cache"},{"location":"fields/#fields_78","text":"Field Name Field Type Description configMap ConfigMapKeySelector ConfigMap sets a ConfigMap-based cache","title":"Fields"},{"location":"fields/#manifestfrom","text":"No description available","title":"ManifestFrom"},{"location":"fields/#fields_79","text":"Field Name Field Type Description artifact Artifact Artifact contains the artifact to use","title":"Fields"},{"location":"fields/#continueon","text":"ContinueOn defines if a workflow should continue even if a task or step fails/errors. It can be specified if the workflow should continue when the pod errors, fails or both. 
Examples with this field (click to open) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml)","title":"ContinueOn"},{"location":"fields/#fields_80","text":"Field Name Field Type Description error boolean No description available failed boolean No description available","title":"Fields"},{"location":"fields/#item","text":"Item expands a single workflow step into multiple parallel steps The value of Item can be a map, string, bool, or number Examples with this field (click to open) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml)","title":"Item"},{"location":"fields/#sequence","text":"Sequence expands a workflow step into numeric range Examples with this field (click to open) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"Sequence"},{"location":"fields/#fields_81","text":"Field Name Field Type Description count IntOrString Count is number of elements in the sequence (default: 0). Not to be used with end end IntOrString Number at which to end the sequence (default: 0). 
Not to be used with Count format string Format is a printf format string to format the value in the sequence start IntOrString Number at which to start the sequence (default: 0)","title":"Fields"},{"location":"fields/#artifactoryartifactrepository","text":"ArtifactoryArtifactRepository defines the controller configuration for an artifactory artifact repository Examples with this field (click to open) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml)","title":"ArtifactoryArtifactRepository"},{"location":"fields/#fields_82","text":"Field Name Field Type Description keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password repoURL string RepoURL is the url for artifactory repo. usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username","title":"Fields"},{"location":"fields/#azureartifactrepository","text":"AzureArtifactRepository defines the controller configuration for an Azure Blob Storage artifact repository Examples with this field (click to open) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml)","title":"AzureArtifactRepository"},{"location":"fields/#fields_83","text":"Field Name Field Type Description accountKeySecret SecretKeySelector AccountKeySecret is the secret selector to the Azure Blob Storage account access key blobNameFormat string BlobNameFormat is defines the format of how to store blob names. Can reference workflow variables container string Container is the container where resources will be stored endpoint string Endpoint is the service url associated with an account. It is most likely \"https:// .blob.core.windows.net\" useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":"Fields"},{"location":"fields/#gcsartifactrepository","text":"GCSArtifactRepository defines the controller configuration for a GCS artifact repository Examples with this field (click to open) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml)","title":"GCSArtifactRepository"},{"location":"fields/#fields_84","text":"Field Name Field Type Description bucket string Bucket is the name of the bucket keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. 
serviceAccountKeySecret SecretKeySelector ServiceAccountKeySecret is the secret selector to the bucket's service account key","title":"Fields"},{"location":"fields/#hdfsartifactrepository","text":"HDFSArtifactRepository defines the controller configuration for an HDFS artifact repository Examples with this field (click to open) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml)","title":"HDFSArtifactRepository"},{"location":"fields/#fields_85","text":"Field Name Field Type Description addresses Array< string > Addresses is accessible addresses of HDFS name nodes force boolean Force copies a file forcibly even if it exists hdfsUser string HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used. krbCCacheSecret SecretKeySelector KrbCCacheSecret is the secret selector for Kerberos ccache Either ccache or keytab can be set to use Kerberos. krbConfigConfigMap ConfigMapKeySelector KrbConfig is the configmap selector for Kerberos config as string It must be set if either ccache or keytab is used. krbKeytabSecret SecretKeySelector KrbKeytabSecret is the secret selector for Kerberos keytab Either ccache or keytab can be set to use Kerberos. krbRealm string KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used. krbServicePrincipalName string KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used. krbUsername string KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used. pathFormat string PathFormat is defines the format of path to store a file. Can reference workflow variables","title":"Fields"},{"location":"fields/#ossartifactrepository","text":"OSSArtifactRepository defines the controller configuration for an OSS artifact repository Examples with this field (click to open) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml)","title":"OSSArtifactRepository"},{"location":"fields/#fields_86","text":"Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's access key bucket string Bucket is the name of the bucket createBucketIfNotPresent boolean CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist endpoint string Endpoint is the hostname of the bucket endpoint keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. lifecycleRule OSSLifecycleRule LifecycleRule specifies how to manage bucket's lifecycle secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key securityToken string SecurityToken is the user's temporary security token. 
For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":"Fields"},{"location":"fields/#s3artifactrepository","text":"S3ArtifactRepository defines the controller configuration for an S3 artifact repository","title":"S3ArtifactRepository"},{"location":"fields/#fields_87","text":"Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's access key bucket string Bucket is the name of the bucket caSecret SecretKeySelector CASecret specifies the secret that contains the CA, used to verify the TLS connection createBucketIfNotPresent CreateS3BucketOptions CreateBucketIfNotPresent tells the driver to attempt to create the S3 bucket for output artifacts, if it doesn't exist. Setting Enabled Encryption will apply either SSE-S3 to the bucket if KmsKeyId is not set or SSE-KMS if it is. encryptionOptions S3EncryptionOptions No description available endpoint string Endpoint is the hostname of the bucket endpoint insecure boolean Insecure will connect to the service with TLS keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. ~~ keyPrefix ~~ ~~ string ~~ ~~KeyPrefix is prefix used as part of the bucket key in which the controller will store artifacts.~~ DEPRECATED. Use KeyFormat instead region string Region contains the optional bucket region roleARN string RoleARN is the Amazon Resource Name (ARN) of the role to assume. secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":"Fields"},{"location":"fields/#mutexholding","text":"MutexHolding describes the mutex and the object which is holding it.","title":"MutexHolding"},{"location":"fields/#fields_88","text":"Field Name Field Type Description holder string Holder is a reference to the object which holds the Mutex. Holding Scenario: 1. Current workflow's NodeID which is holding the lock. e.g: ${NodeID} Waiting Scenario: 1. Current workflow or other workflow NodeID which is holding the lock. e.g: ${WorkflowName}/${NodeID} mutex string Reference for the mutex e.g: ${namespace}/mutex/${mutexName}","title":"Fields"},{"location":"fields/#semaphoreholding","text":"No description available","title":"SemaphoreHolding"},{"location":"fields/#fields_89","text":"Field Name Field Type Description holders Array< string > Holders stores the list of current holder names in the io.argoproj.workflow.v1alpha1. semaphore string Semaphore stores the semaphore name.","title":"Fields"},{"location":"fields/#nonestrategy","text":"NoneStrategy indicates to skip tar process and upload the files or directory tree as independent files. Note that if the artifact is a directory, the artifact driver must support the ability to save/load the directory appropriately. 
Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml)","title":"NoneStrategy"},{"location":"fields/#tarstrategy","text":"TarStrategy will tar and gzip the file or directory when saving Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml)","title":"TarStrategy"},{"location":"fields/#fields_90","text":"Field Name Field Type Description compressionLevel integer CompressionLevel specifies the gzip compression level to use for the artifact. Defaults to gzip.DefaultCompression.","title":"Fields"},{"location":"fields/#zipstrategy","text":"ZipStrategy will unzip zipped input artifacts","title":"ZipStrategy"},{"location":"fields/#httpauth","text":"No description available Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml)","title":"HTTPAuth"},{"location":"fields/#fields_91","text":"Field Name Field Type Description basicAuth BasicAuth No description available clientCert ClientCertAuth No description available oauth2 OAuth2Auth No description available","title":"Fields"},{"location":"fields/#header","text":"Header indicate a key-value request header to be used when fetching artifacts over HTTP Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml)","title":"Header"},{"location":"fields/#fields_92","text":"Field Name Field Type Description name string Name is the header name value string Value is the literal value to use for the header","title":"Fields"},{"location":"fields/#osslifecyclerule","text":"OSSLifecycleRule specifies how to manage bucket's lifecycle","title":"OSSLifecycleRule"},{"location":"fields/#fields_93","text":"Field Name Field Type Description markDeletionAfterDays integer MarkDeletionAfterDays is the number of days before we delete objects in the bucket markInfrequentAccessAfterDays integer MarkInfrequentAccessAfterDays is the number of days before we convert the objects in the bucket to Infrequent Access (IA) storage type","title":"Fields"},{"location":"fields/#creates3bucketoptions","text":"CreateS3BucketOptions options used to determine automatic automatic bucket-creation process","title":"CreateS3BucketOptions"},{"location":"fields/#fields_94","text":"Field Name Field Type Description objectLocking boolean ObjectLocking Enable object locking","title":"Fields"},{"location":"fields/#s3encryptionoptions","text":"S3EncryptionOptions used to determine encryption options during s3 operations","title":"S3EncryptionOptions"},{"location":"fields/#fields_95","text":"Field Name Field Type Description enableEncryption 
boolean EnableEncryption tells the driver to encrypt objects if set to true. If kmsKeyId and serverSideCustomerKeySecret are not set, SSE-S3 will be used kmsEncryptionContext string KmsEncryptionContext is a json blob that contains an encryption context. See https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context for more information kmsKeyId string KMSKeyId tells the driver to encrypt the object using the specified KMS Key. serverSideCustomerKeySecret SecretKeySelector ServerSideCustomerKeySecret tells the driver to encrypt the output artifacts using SSE-C with the specified secret.","title":"Fields"},{"location":"fields/#suppliedvaluefrom","text":"SuppliedValueFrom is a placeholder for a value to be filled in directly, either through the CLI, API, etc. Examples with this field (click to open) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml)","title":"SuppliedValueFrom"},{"location":"fields/#amount","text":"Amount represent a numeric amount. Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml)","title":"Amount"},{"location":"fields/#artifactpaths","text":"ArtifactPaths expands a step from a collection of artifacts Examples with this field (click to open) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml)","title":"ArtifactPaths"},{"location":"fields/#fields_96","text":"Field Name Field Type Description archive ArchiveStrategy Archive controls how the artifact will be saved to the artifact repository. archiveLogs boolean ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC describes the strategy to use when to deleting an artifact from completed or deleted workflows artifactory ArtifactoryArtifact Artifactory contains artifactory artifact location details azure AzureArtifact Azure contains Azure Storage artifact location details deleted boolean Has this been deleted? from string From allows an artifact to reference an artifact from a previous step fromExpression string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCS contains GCS artifact location details git GitArtifact Git contains git artifact location details globalName string GlobalName exports an output artifact to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFS contains HDFS artifact location details http HTTPArtifact HTTP contains HTTP artifact location details mode integer mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string name of the artifact. must be unique within a template's inputs/outputs. 
optional boolean Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSS contains OSS artifact location details path string Path is the container path to the artifact raw RawArtifact Raw contains raw artifact location details recurseMode boolean If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3 contains S3 artifact location details subPath string SubPath allows an artifact to be sourced from a subpath within the specified source","title":"Fields"},{"location":"fields/#httpheadersource","text":"No description available Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - 
[`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml)","title":"HTTPHeaderSource"},{"location":"fields/#fields_97","text":"Field Name Field Type Description secretKeyRef SecretKeySelector No description available","title":"Fields"},{"location":"fields/#basicauth","text":"BasicAuth describes the secret selectors required for basic authentication","title":"BasicAuth"},{"location":"fields/#fields_98","text":"Field Name Field Type Description passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username","title":"Fields"},{"location":"fields/#clientcertauth","text":"ClientCertAuth holds necessary information for client authentication via certificates Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml)","title":"ClientCertAuth"},{"location":"fields/#fields_99","text":"Field Name Field Type Description clientCertSecret SecretKeySelector No description available clientKeySecret SecretKeySelector No description available","title":"Fields"},{"location":"fields/#oauth2auth","text":"OAuth2Auth holds all information for client authentication via OAuth2 tokens","title":"OAuth2Auth"},{"location":"fields/#fields_100","text":"Field Name Field Type Description clientIDSecret SecretKeySelector No description available clientSecretSecret SecretKeySelector No description available endpointParams Array< OAuth2EndpointParam > No description available scopes Array< string > No description available tokenURLSecret SecretKeySelector No description available","title":"Fields"},{"location":"fields/#oauth2endpointparam","text":"EndpointParam is for requesting optional fields that should be sent in the oauth request Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml)","title":"OAuth2EndpointParam"},{"location":"fields/#fields_101","text":"Field Name Field Type Description key string Name is the header name value string Value is the literal value to use for the header","title":"Fields"},{"location":"fields/#external-fields","text":"","title":"External Fields"},{"location":"fields/#objectmeta","text":"ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create. 
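To show how the authentication selectors above (BasicAuth, ClientCertAuth, OAuth2Auth) are typically wired up, here is a sketch of an HTTP input artifact authenticated with a client certificate, in the spirit of the webhdfs example; the URL and Secret names/keys are placeholders.

```yaml
inputs:
  artifacts:
    - name: data
      path: /tmp/data.json
      http:
        url: https://example.com/webhdfs/v1/data.json?op=OPEN  # placeholder endpoint
        auth:
          clientCert:
            clientCertSecret:
              name: my-tls-secret   # hypothetical Secret holding the client certificate
              key: tls.crt
            clientKeySecret:
              name: my-tls-secret
              key: tls.key
```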
Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - 
[`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - 
[`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - 
[`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - 
[`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - 
[`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml)","title":"ObjectMeta"},{"location":"fields/#fields_102","text":"Field Name Field Type Description annotations Map< string , string > Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations clusterName string The name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request. creationTimestamp Time CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata deletionGracePeriodSeconds integer Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. deletionTimestamp Time DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested. Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata finalizers Array< string > Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. 
If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency generation integer A sequence number representing a specific generation of the desired state. Populated by the system. Read-only. labels Map< string , string > Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels managedFields Array< ManagedFieldsEntry > ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like \"ci-cd\". The set of fields is always in the version that the workflow used when modifying the object. name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences Array< OwnerReference > List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. 
If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. resourceVersion string An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency ~~ selfLink ~~ ~~ string ~~ ~~SelfLink is a URL representing this object. Populated by the system. Read-only.~~ DEPRECATED Kubernetes will stop propagating this field in 1.20 release and the field is planned to be removed in 1.21 release. uid string UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations. Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids","title":"Fields"},{"location":"fields/#affinity","text":"Affinity is a group of affinity scheduling rules.","title":"Affinity"},{"location":"fields/#fields_103","text":"Field Name Field Type Description nodeAffinity NodeAffinity Describes node affinity scheduling rules for the pod. podAffinity PodAffinity Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity PodAntiAffinity Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).","title":"Fields"},{"location":"fields/#poddnsconfig","text":"PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. Examples with this field (click to open) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml)","title":"PodDNSConfig"},{"location":"fields/#fields_104","text":"Field Name Field Type Description nameservers Array< string > A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options Array< PodDNSConfigOption > A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. searches Array< string > A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed.","title":"Fields"},{"location":"fields/#hostalias","text":"HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file.","title":"HostAlias"},{"location":"fields/#fields_105","text":"Field Name Field Type Description hostnames Array< string > Hostnames for the above IP address. ip string IP address of the host file entry.","title":"Fields"},{"location":"fields/#localobjectreference","text":"LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. 
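The ObjectMeta fields most commonly set by hand on a Workflow are `generateName`, `labels`, and `annotations`. A minimal sketch, with placeholder label and annotation values, is shown below; the server appends a unique suffix to `generateName`, so repeated submissions do not collide on name.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-      # server generates a unique name from this prefix
  labels:
    team: data-platform           # example label used for selection/organization
  annotations:
    notes: "arbitrary, non-queryable metadata"
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: busybox
        command: [echo, hello]
```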
Examples with this field (click to open) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml)","title":"LocalObjectReference"},{"location":"fields/#fields_106","text":"Field Name Field Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names","title":"Fields"},{"location":"fields/#poddisruptionbudgetspec","text":"PodDisruptionBudgetSpec is a description of a PodDisruptionBudget. Examples with this field (click to open) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml)","title":"PodDisruptionBudgetSpec"},{"location":"fields/#fields_107","text":"Field Name Field Type Description maxUnavailable IntOrString An eviction is allowed if at most \"maxUnavailable\" pods selected by \"selector\" are unavailable after the eviction, i.e. even in absence of the evicted pod. For example, one can prevent all voluntary evictions by specifying 0. This is a mutually exclusive setting with \"minAvailable\". minAvailable IntOrString An eviction is allowed if at least \"minAvailable\" pods selected by \"selector\" will still be available after the eviction, i.e. even in the absence of the evicted pod. So for example you can prevent all voluntary evictions by specifying \"100%\". selector LabelSelector Label query over pods whose evictions are managed by the disruption budget. A null selector will match no pods, while an empty ({}) selector will select all pods within the namespace.","title":"Fields"},{"location":"fields/#podsecuritycontext","text":"PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Examples with this field (click to open) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml)","title":"PodSecurityContext"},{"location":"fields/#fields_108","text":"Field Name Field Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are \"OnRootMismatch\" and \"Always\". If not specified, \"Always\" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. 
runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions SELinuxOptions The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile SeccompProfile The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups Array< integer > A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows. sysctls Array< Sysctl > Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. windowsOptions WindowsSecurityContextOptions The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.","title":"Fields"},{"location":"fields/#toleration","text":"The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator .","title":"Toleration"},{"location":"fields/#fields_109","text":"Field Name Field Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - \"NoExecute\" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - \"NoSchedule\" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - \"PreferNoSchedule\" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. 
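The PodSecurityContext fields above can be applied to all pods of a workflow via the workflow spec's `securityContext`. A minimal sketch, with an assumed non-root UID/GID, is:

```yaml
spec:
  securityContext:
    runAsNonRoot: true   # refuse to start containers that would run as UID 0
    runAsUser: 8737      # hypothetical non-root UID for the container entrypoints
    fsGroup: 8737        # supplemental group applied to supported volume types
```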
operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - \"Equal\" - \"Exists\" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.","title":"Fields"},{"location":"fields/#persistentvolumeclaim","text":"PersistentVolumeClaim is a user's request for and claim to a persistent volume Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"PersistentVolumeClaim"},{"location":"fields/#fields_110","text":"Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec PersistentVolumeClaimSpec Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status PersistentVolumeClaimStatus Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims","title":"Fields"},{"location":"fields/#volume","text":"Volume represents a named volume in a pod that may be accessed by any container in the pod. 
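A common way to use PersistentVolumeClaim in a workflow is through `volumeClaimTemplates`, which creates a claim for the lifetime of the workflow; the claim name and size below are examples only.

```yaml
spec:
  volumeClaimTemplates:            # one PVC is created per workflow run
    - metadata:
        name: workdir              # referenced by volumeMounts in templates
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi           # example size
```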
Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml)","title":"Volume"},{"location":"fields/#fields_111","text":"Field Name Field Type Description awsElasticBlockStore AWSElasticBlockStoreVolumeSource AWSElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk AzureDiskVolumeSource AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile AzureFileVolumeSource AzureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs CephFSVolumeSource CephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder CinderVolumeSource Cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap ConfigMapVolumeSource ConfigMap represents a configMap that should populate this volume csi CSIVolumeSource CSI (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI DownwardAPIVolumeSource DownwardAPI represents downward API about the pod that should populate this volume emptyDir EmptyDirVolumeSource EmptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral EphemeralVolumeSource Ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc FCVolumeSource FC represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. 
flexVolume FlexVolumeSource FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker FlockerVolumeSource Flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk GCEPersistentDiskVolumeSource GCEPersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk ~~ gitRepo ~~ ~~ GitRepoVolumeSource ~~ ~~GitRepo represents a git repository at a particular revision.~~ DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs GlusterfsVolumeSource Glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath HostPathVolumeSource HostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath iscsi ISCSIVolumeSource ISCSI represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string Volume's name. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs NFSVolumeSource NFS represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim PersistentVolumeClaimVolumeSource PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk PhotonPersistentDiskVolumeSource PhotonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume PortworxVolumeSource PortworxVolume represents a portworx volume attached and mounted on kubelets host machine projected ProjectedVolumeSource Items for all in one resources secrets, configmaps, and downward API quobyte QuobyteVolumeSource Quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd RBDVolumeSource RBD represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO ScaleIOVolumeSource ScaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret SecretVolumeSource Secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos StorageOSVolumeSource StorageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume VsphereVirtualDiskVolumeSource VsphereVolume represents a vSphere volume attached and mounted on kubelets host machine","title":"Fields"},{"location":"fields/#time","text":"Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. 
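Of the many volume sources listed above, `emptyDir` and `secret` are the ones most often seen in workflow specs. A minimal sketch, assuming a Secret named `my-secret` exists, is shown below; volumes are declared at the workflow level and mounted per container.

```yaml
spec:
  volumes:
    - name: scratch
      emptyDir: {}                 # temporary directory that lives as long as the pod
    - name: creds
      secret:
        secretName: my-secret      # hypothetical Secret projected as files
  templates:
    - name: main
      container:
        image: busybox
        command: [ls, /work, /secrets]
        volumeMounts:
          - name: scratch
            mountPath: /work
          - name: creds
            mountPath: /secrets
```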
Wrappers are provided for many of the factory methods that the time package offers.","title":"Time"},{"location":"fields/#objectreference","text":"ObjectReference contains enough information to let you inspect or modify the referred object.","title":"ObjectReference"},{"location":"fields/#fields_112","text":"Field Name Field Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: \"spec.containers{name}\" (where \"name\" refers to the name of the container that triggered the event) or if no container name is specified \"spec.containers[2]\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids","title":"Fields"},{"location":"fields/#duration","text":"Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json. Examples with this field (click to open) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml)","title":"Duration"},{"location":"fields/#fields_113","text":"Field Name Field Type Description duration string No description available","title":"Fields"},{"location":"fields/#labelselector","text":"A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Examples with this field (click to open) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml)","title":"LabelSelector"},{"location":"fields/#fields_114","text":"Field Name Field Type Description matchExpressions Array< LabelSelectorRequirement > matchExpressions is a list of label selector requirements. The requirements are ANDed. matchLabels Map< string , string > matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values array contains only \"value\". 
The requirements are ANDed.","title":"Fields"},{"location":"fields/#intorstring","text":"No description available Examples with this field (click to open) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"IntOrString"},{"location":"fields/#container","text":"A single application container that you want to run within a pod. Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - 
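LabelSelector and Duration appear together in pod garbage collection: the selector restricts which pods are eligible for deletion, and the delay (per the pod-gc examples referenced above) postpones it. A sketch, with the field names and label value assumed from those examples, is:

```yaml
spec:
  podGC:
    strategy: OnPodSuccess           # delete pods as soon as they succeed
    deleteDelayDuration: 30s         # Duration before deletion (assumed field name)
    labelSelector:                   # only garbage-collect pods matching these labels
      matchLabels:
        should-be-deleted: "true"    # example label value
```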
[`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - 
[`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - 
[`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - 
[`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - 
[`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml)","title":"Container"},{"location":"fields/#fields_115","text":"Field Name Field Type Description args Array< string > Arguments to the entrypoint. The docker image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The docker image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Docker image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - \"Always\" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - \"IfNotPresent\" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - \"Never\" means that kubelet never pulls an image, but only uses a local image. 
Container will fail if the image isn't present lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. 
terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - \"FallbackToLogsOnError\" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - \"File\" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.","title":"Fields"},{"location":"fields/#configmapkeyselector","text":"Selects a key from a ConfigMap. Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml)","title":"ConfigMapKeySelector"},{"location":"fields/#fields_116","text":"Field Name Field Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined","title":"Fields"},{"location":"fields/#volumemount","text":"VolumeMount describes a mounting of a Volume within a container. 
Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"VolumeMount"},{"location":"fields/#fields_117","text":"Field Name Field Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to \"\" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to \"\" (volume's root). SubPathExpr and SubPath are mutually exclusive.","title":"Fields"},{"location":"fields/#envvar","text":"EnvVar represents an environment variable present in a Container. Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml)","title":"EnvVar"},{"location":"fields/#fields_118","text":"Field Name Field Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. 
value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to \"\". valueFrom EnvVarSource Source for the environment variable's value. Cannot be used if value is not empty.","title":"Fields"},{"location":"fields/#envfromsource","text":"EnvFromSource represents the source of a set of ConfigMaps","title":"EnvFromSource"},{"location":"fields/#fields_119","text":"Field Name Field Type Description configMapRef ConfigMapEnvSource The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef SecretEnvSource The Secret to select from","title":"Fields"},{"location":"fields/#lifecycle","text":"Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.","title":"Lifecycle"},{"location":"fields/#fields_120","text":"Field Name Field Type Description postStart LifecycleHandler PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop LifecycleHandler PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks","title":"Fields"},{"location":"fields/#probe","text":"Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.","title":"Probe"},{"location":"fields/#fields_121","text":"Field Name Field Type Description exec ExecAction Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc GRPCAction GRPC specifies an action involving a GRPC port. This is an alpha field and requires enabling GRPCContainerProbe feature gate. httpGet HTTPGetAction HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket TCPSocketAction TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes","title":"Fields"},{"location":"fields/#containerport","text":"ContainerPort represents a network port in a single container.","title":"ContainerPort"},{"location":"fields/#fields_122","text":"Field Name Field Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to \"TCP\". Possible enum values: - \"SCTP\" is the SCTP protocol. - \"TCP\" is the TCP protocol. - \"UDP\" is the UDP protocol.","title":"Fields"},{"location":"fields/#resourcerequirements","text":"ResourceRequirements describes the compute resource requirements. 
Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"ResourceRequirements"},{"location":"fields/#fields_123","text":"Field Name Field Type Description limits Quantity Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests Quantity Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/","title":"Fields"},{"location":"fields/#securitycontext","text":"SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Examples with this field (click to open) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml)","title":"SecurityContext"},{"location":"fields/#fields_124","text":"Field Name Field Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities Capabilities The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. 
readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions SELinuxOptions The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile SeccompProfile The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions WindowsSecurityContextOptions The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.","title":"Fields"},{"location":"fields/#volumedevice","text":"volumeDevice describes a mapping of a raw block device within a container.","title":"VolumeDevice"},{"location":"fields/#fields_125","text":"Field Name Field Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod","title":"Fields"},{"location":"fields/#secretkeyselector","text":"SecretKeySelector selects a key of a Secret. Examples with this field (click to open) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml)","title":"SecretKeySelector"},{"location":"fields/#fields_126","text":"Field Name Field Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined","title":"Fields"},{"location":"fields/#managedfieldsentry","text":"ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to.","title":"ManagedFieldsEntry"},{"location":"fields/#fields_127","text":"Field Name Field Type Description apiVersion string APIVersion defines the version of this resource that this field set applies to. The format is \"group/version\" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted. fieldsType string FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: \"FieldsV1\" fieldsV1 FieldsV1 FieldsV1 holds the first JSON version format as described in the \"FieldsV1\" type. manager string Manager is an identifier of the workflow managing these fields. operation string Operation is the type of operation which led to this ManagedFieldsEntry being created. The only valid values for this field are 'Apply' and 'Update'. subresource string Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource. time Time Time is the timestamp of when these fields were set. It should always be empty if Operation is 'Apply'","title":"Fields"},{"location":"fields/#ownerreference","text":"OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field.","title":"OwnerReference"},{"location":"fields/#fields_128","text":"Field Name Field Type Description apiVersion string API version of the referent. blockOwnerDeletion boolean If true, AND if the owner has the \"foregroundDeletion\" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. Defaults to false. To set this field, a user needs \"delete\" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names uid string UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids","title":"Fields"},{"location":"fields/#nodeaffinity","text":"Node affinity is a group of node affinity scheduling rules.","title":"NodeAffinity"},{"location":"fields/#fields_129","text":"Field Name Field Type Description preferredDuringSchedulingIgnoredDuringExecution Array< PreferredSchedulingTerm > The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution NodeSelector If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.","title":"Fields"},{"location":"fields/#podaffinity","text":"Pod affinity is a group of inter pod affinity scheduling rules.","title":"PodAffinity"},{"location":"fields/#fields_130","text":"Field Name Field Type Description preferredDuringSchedulingIgnoredDuringExecution Array< WeightedPodAffinityTerm > The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution Array< PodAffinityTerm > If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.","title":"Fields"},{"location":"fields/#podantiaffinity","text":"Pod anti affinity is a group of inter pod anti affinity scheduling rules.","title":"PodAntiAffinity"},{"location":"fields/#fields_131","text":"Field Name Field Type Description preferredDuringSchedulingIgnoredDuringExecution Array< WeightedPodAffinityTerm > The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution Array< PodAffinityTerm > If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. 
due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.","title":"Fields"},{"location":"fields/#poddnsconfigoption","text":"PodDNSConfigOption defines DNS resolver options of a pod. Examples with this field (click to open) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml)","title":"PodDNSConfigOption"},{"location":"fields/#fields_132","text":"Field Name Field Type Description name string Required. value string No description available","title":"Fields"},{"location":"fields/#selinuxoptions","text":"SELinuxOptions are the labels to be applied to the container","title":"SELinuxOptions"},{"location":"fields/#fields_133","text":"Field Name Field Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container.","title":"Fields"},{"location":"fields/#seccompprofile","text":"SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set.","title":"SeccompProfile"},{"location":"fields/#fields_134","text":"Field Name Field Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is \"Localhost\". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - \"Localhost\" indicates a profile defined in a file on the node should be used. The file's location relative to /seccomp. - \"RuntimeDefault\" represents the default container runtime seccomp profile. - \"Unconfined\" indicates no seccomp profile is applied (A.K.A. unconfined).","title":"Fields"},{"location":"fields/#sysctl","text":"Sysctl defines a kernel parameter to be set","title":"Sysctl"},{"location":"fields/#fields_135","text":"Field Name Field Type Description name string Name of a property to set value string Value of a property to set","title":"Fields"},{"location":"fields/#windowssecuritycontextoptions","text":"WindowsSecurityContextOptions contain Windows-specific options and credentials.","title":"WindowsSecurityContextOptions"},{"location":"fields/#fields_136","text":"Field Name Field Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. 
All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.","title":"Fields"},{"location":"fields/#persistentvolumeclaimspec","text":"PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - 
[`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - 
[`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - 
[`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - 
[`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - 
[`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - 
[`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml)","title":"PersistentVolumeClaimSpec"},{"location":"fields/#fields_137","text":"Field Name Field Type Description accessModes Array< string > AccessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource TypedLocalObjectReference This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. dataSourceRef TypedLocalObjectReference Specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Alpha) Using this field requires the AnyVolumeDataSource feature gate to be enabled. resources ResourceRequirements Resources represents the minimum resources the volume should have. 
If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector LabelSelector A label query over volumes to consider for binding. storageClassName string Name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string VolumeName is the binding reference to the PersistentVolume backing this claim.","title":"Fields"},{"location":"fields/#persistentvolumeclaimstatus","text":"PersistentVolumeClaimStatus is the current status of a persistent volume claim.","title":"PersistentVolumeClaimStatus"},{"location":"fields/#fields_138","text":"Field Name Field Type Description accessModes Array< string > AccessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResources Quantity The storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity Quantity Represents the actual resources of the underlying volume. conditions Array< PersistentVolumeClaimCondition > Current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. phase string Phase represents the current phase of PersistentVolumeClaim. Possible enum values: - \"Bound\" used for PersistentVolumeClaims that are bound - \"Lost\" used for PersistentVolumeClaims that lost their underlying PersistentVolume. The claim was bound to a PersistentVolume and this volume does not exist any longer and all data on it was lost. - \"Pending\" used for PersistentVolumeClaims that are not yet bound resizeStatus string ResizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.","title":"Fields"},{"location":"fields/#awselasticblockstorevolumesource","text":"Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling.","title":"AWSElasticBlockStoreVolumeSource"},{"location":"fields/#fields_139","text":"Field Name Field Type Description fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. 
Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). readOnly boolean Specify \"true\" to force and set the ReadOnly property in VolumeMounts to \"true\". If omitted, the default is \"false\". More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string Unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore","title":"Fields"},{"location":"fields/#azurediskvolumesource","text":"AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.","title":"AzureDiskVolumeSource"},{"location":"fields/#fields_140","text":"Field Name Field Type Description cachingMode string Host Caching mode: None, Read Only, Read Write. diskName string The Name of the data disk in the blob storage diskURI string The URI the data disk in the blob storage fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. kind string Expected values Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.","title":"Fields"},{"location":"fields/#azurefilevolumesource","text":"AzureFile represents an Azure File Service mount on the host and bind mount to the pod.","title":"AzureFileVolumeSource"},{"location":"fields/#fields_141","text":"Field Name Field Type Description readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string the name of secret that contains Azure Storage Account Name and Key shareName string Share Name","title":"Fields"},{"location":"fields/#cephfsvolumesource","text":"Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling.","title":"CephFSVolumeSource"},{"location":"fields/#fields_142","text":"Field Name Field Type Description monitors Array< string > Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef LocalObjectReference Optional: SecretRef is reference to the authentication secret for User, default is empty. 
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string Optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it","title":"Fields"},{"location":"fields/#cindervolumesource","text":"Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling.","title":"CinderVolumeSource"},{"location":"fields/#fields_143","text":"Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef LocalObjectReference Optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volume id used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md","title":"Fields"},{"location":"fields/#configmapvolumesource","text":"Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml)","title":"ConfigMapVolumeSource"},{"location":"fields/#fields_144","text":"Field Name Field Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its keys must be defined","title":"Fields"},{"location":"fields/#csivolumesource","text":"Represents a source location of a volume to mount, managed by an external CSI driver","title":"CSIVolumeSource"},{"location":"fields/#fields_145","text":"Field Name Field Type Description driver string Driver is the name of the CSI driver that handles this volume. 
Consult with your admin for the correct name as registered in the cluster. fsType string Filesystem type to mount. Ex. \"ext4\", \"xfs\", \"ntfs\". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef LocalObjectReference NodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean Specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes Map< string , string > VolumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values.","title":"Fields"},{"location":"fields/#downwardapivolumesource","text":"DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling.","title":"DownwardAPIVolumeSource"},{"location":"fields/#fields_146","text":"Field Name Field Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items Array< DownwardAPIVolumeFile > Items is a list of downward API volume files","title":"Fields"},{"location":"fields/#emptydirvolumesource","text":"Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling. Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml)","title":"EmptyDirVolumeSource"},{"location":"fields/#fields_147","text":"Field Name Field Type Description medium string What type of storage medium should back this directory. The default is \"\" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit Quantity Total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. 
More info: http://kubernetes.io/docs/user-guide/volumes#emptydir","title":"Fields"},{"location":"fields/#ephemeralvolumesource","text":"Represents an ephemeral volume that is handled by a normal storage driver.","title":"EphemeralVolumeSource"},{"location":"fields/#fields_148","text":"Field Name Field Type Description volumeClaimTemplate PersistentVolumeClaimTemplate Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil.","title":"Fields"},{"location":"fields/#fcvolumesource","text":"Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling.","title":"FCVolumeSource"},{"location":"fields/#fields_149","text":"Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. lun integer Optional: FC target lun number readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs Array< string > Optional: FC target worldwide names (WWNs) wwids Array< string > Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously.","title":"Fields"},{"location":"fields/#flexvolumesource","text":"FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin.","title":"FlexVolumeSource"},{"location":"fields/#fields_150","text":"Field Name Field Type Description driver string Driver is the name of the driver to use for this volume. fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". The default filesystem depends on FlexVolume script. options Map< string , string > Optional: Extra command options if any. readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference Optional: SecretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts.","title":"Fields"},{"location":"fields/#flockervolumesource","text":"Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. 
Flocker volumes do not support ownership management or SELinux relabeling.","title":"FlockerVolumeSource"},{"location":"fields/#fields_151","text":"Field Name Field Type Description datasetName string Name of the dataset stored as metadata -> name on the dataset for Flocker should be considered as deprecated datasetUUID string UUID of the dataset. This is unique identifier of a Flocker dataset","title":"Fields"},{"location":"fields/#gcepersistentdiskvolumesource","text":"Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling.","title":"GCEPersistentDiskVolumeSource"},{"location":"fields/#fields_152","text":"Field Name Field Type Description fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string Unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk","title":"Fields"},{"location":"fields/#gitrepovolumesource","text":"Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container.","title":"GitRepoVolumeSource"},{"location":"fields/#fields_153","text":"Field Name Field Type Description directory string Target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string Repository URL revision string Commit hash for the specified revision.","title":"Fields"},{"location":"fields/#glusterfsvolumesource","text":"Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling.","title":"GlusterfsVolumeSource"},{"location":"fields/#fields_154","text":"Field Name Field Type Description endpoints string EndpointsName is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string Path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean ReadOnly here will force the Glusterfs volume to be mounted with read-only permissions. 
Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod","title":"Fields"},{"location":"fields/#hostpathvolumesource","text":"Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling.","title":"HostPathVolumeSource"},{"location":"fields/#fields_155","text":"Field Name Field Type Description path string Path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string Type for HostPath Volume Defaults to \"\" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath","title":"Fields"},{"location":"fields/#iscsivolumesource","text":"Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling.","title":"ISCSIVolumeSource"},{"location":"fields/#fields_156","text":"Field Name Field Type Description chapAuthDiscovery boolean whether to support iSCSI Discovery CHAP authentication chapAuthSession boolean whether to support iSCSI Session CHAP authentication fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string Custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, a new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string Target iSCSI Qualified Name. iscsiInterface string iSCSI Interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer iSCSI Target Lun number. portals Array< string > iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef LocalObjectReference CHAP Secret for iSCSI target and initiator authentication targetPortal string iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).","title":"Fields"},{"location":"fields/#nfsvolumesource","text":"Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling.","title":"NFSVolumeSource"},{"location":"fields/#fields_157","text":"Field Name Field Type Description path string Path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean ReadOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string Server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs","title":"Fields"},{"location":"fields/#persistentvolumeclaimvolumesource","text":"PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). 
Examples with this field (click to open) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml)","title":"PersistentVolumeClaimVolumeSource"},{"location":"fields/#fields_158","text":"Field Name Field Type Description claimName string ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean Will force the ReadOnly setting in VolumeMounts. Default false.","title":"Fields"},{"location":"fields/#photonpersistentdiskvolumesource","text":"Represents a Photon Controller persistent disk resource.","title":"PhotonPersistentDiskVolumeSource"},{"location":"fields/#fields_159","text":"Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. pdID string ID that identifies Photon Controller persistent disk","title":"Fields"},{"location":"fields/#portworxvolumesource","text":"PortworxVolumeSource represents a Portworx volume resource.","title":"PortworxVolumeSource"},{"location":"fields/#fields_160","text":"Field Name Field Type Description fsType string FSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\". Implicitly inferred to be \"ext4\" if unspecified. readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string VolumeID uniquely identifies a Portworx volume","title":"Fields"},{"location":"fields/#projectedvolumesource","text":"Represents a projected volume source","title":"ProjectedVolumeSource"},{"location":"fields/#fields_161","text":"Field Name Field Type Description defaultMode integer Mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources Array< VolumeProjection > list of volume projections","title":"Fields"},{"location":"fields/#quobytevolumesource","text":"Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling.","title":"QuobyteVolumeSource"},{"location":"fields/#fields_162","text":"Field Name Field Type Description group string Group to map volume access to Default is no group readOnly boolean ReadOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. 
registry string Registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string Tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string User to map volume access to Defaults to serivceaccount user volume string Volume is a string that references an already created Quobyte volume by name.","title":"Fields"},{"location":"fields/#rbdvolumesource","text":"Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling.","title":"RBDVolumeSource"},{"location":"fields/#fields_163","text":"Field Name Field Type Description fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string The rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string Keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors Array< string > A collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string The rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef LocalObjectReference SecretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string The rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it","title":"Fields"},{"location":"fields/#scaleiovolumesource","text":"ScaleIOVolumeSource represents a persistent ScaleIO volume","title":"ScaleIOVolumeSource"},{"location":"fields/#fields_164","text":"Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Default is \"xfs\". gateway string The host address of the ScaleIO API Gateway. protectionDomain string The name of the ScaleIO Protection Domain for the configured storage. readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference SecretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean Flag to enable/disable SSL communication with Gateway, default false storageMode string Indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string The ScaleIO Storage Pool associated with the protection domain. system string The name of the storage system as configured in ScaleIO. 
volumeName string The name of a volume already created in the ScaleIO system that is associated with this volume source.","title":"Fields"},{"location":"fields/#secretvolumesource","text":"Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml)","title":"SecretVolumeSource"},{"location":"fields/#fields_165","text":"Field Name Field Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. optional boolean Specify whether the Secret or its keys must be defined secretName string Name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret","title":"Fields"},{"location":"fields/#storageosvolumesource","text":"Represents a StorageOS persistent volume resource.","title":"StorageOSVolumeSource"},{"location":"fields/#fields_166","text":"Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference SecretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string VolumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string VolumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to \"default\" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created.","title":"Fields"},{"location":"fields/#vspherevirtualdiskvolumesource","text":"Represents a vSphere volume resource.","title":"VsphereVirtualDiskVolumeSource"},{"location":"fields/#fields_167","text":"Field Name Field Type Description fsType string Filesystem type to mount. 
Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. storagePolicyID string Storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string Storage Policy Based Management (SPBM) profile name. volumePath string Path that identifies vSphere volume vmdk","title":"Fields"},{"location":"fields/#labelselectorrequirement","text":"A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.","title":"LabelSelectorRequirement"},{"location":"fields/#fields_168","text":"Field Name Field Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values Array< string > values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.","title":"Fields"},{"location":"fields/#envvarsource","text":"EnvVarSource represents a source for the value of an EnvVar. Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - 
[`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml)","title":"EnvVarSource"},{"location":"fields/#fields_169","text":"Field Name Field Type Description configMapKeyRef ConfigMapKeySelector Selects a key of a ConfigMap. fieldRef ObjectFieldSelector Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef ResourceFieldSelector Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef SecretKeySelector Selects a key of a secret in the pod's namespace","title":"Fields"},{"location":"fields/#configmapenvsource","text":"ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables.","title":"ConfigMapEnvSource"},{"location":"fields/#fields_170","text":"Field Name Field Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined","title":"Fields"},{"location":"fields/#secretenvsource","text":"SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables.","title":"SecretEnvSource"},{"location":"fields/#fields_171","text":"Field Name Field Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined","title":"Fields"},{"location":"fields/#lifecyclehandler","text":"LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket, must be specified.","title":"LifecycleHandler"},{"location":"fields/#fields_172","text":"Field Name Field Type Description exec ExecAction Exec specifies the action to take. httpGet HTTPGetAction HTTPGet specifies the http request to perform. tcpSocket TCPSocketAction Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified.","title":"Fields"},{"location":"fields/#execaction","text":"ExecAction describes a \"run in container\" action. 
Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml)","title":"ExecAction"},{"location":"fields/#fields_173","text":"Field Name Field Type Description command Array< string > Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('","title":"Fields"},{"location":"fields/#grpcaction","text":"No description available","title":"GRPCAction"},{"location":"fields/#fields_174","text":"Field Name Field Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.","title":"Fields"},{"location":"fields/#httpgetaction","text":"HTTPGetAction describes an action based on HTTP Get requests. Examples with this field (click to open) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml)","title":"HTTPGetAction"},{"location":"fields/#fields_175","text":"Field Name Field Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set \"Host\" in httpHeaders instead. httpHeaders Array< HTTPHeader > Custom headers to set in the request. HTTP allows repeated headers. path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - \"HTTP\" means that the scheme used will be http:// - \"HTTPS\" means that the scheme used will be https://","title":"Fields"},{"location":"fields/#tcpsocketaction","text":"TCPSocketAction describes an action based on opening a socket","title":"TCPSocketAction"},{"location":"fields/#fields_176","text":"Field Name Field Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.","title":"Fields"},{"location":"fields/#quantity","text":"Quantity is a fixed-point representation of a number. It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors. The serialization format is: ::= (Note that may be empty, from the \"\" case in .) ::= 0 | 1 | ... | 9 ::= | ::= | . | . | . ::= \"+\" | \"-\" ::= | ::= | | ::= Ki | Mi | Gi | Ti | Pi | Ei (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html) ::= m | \"\" | k | M | G | T | P | E (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.) 
::= \"e\" | \"E\" No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized. Before serializing, Quantity will be put in \"canonical form\". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that: a. No precision is lost b. No fractional digits will be emitted c. The exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative. Examples: 1.5 will be serialized as \"1500m\" 1.5Gi will be serialized as \"1536Mi\" Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise. Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.) This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation. Examples with this field (click to open) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml)","title":"Quantity"},{"location":"fields/#capabilities","text":"Adds and removes POSIX capabilities from running containers.","title":"Capabilities"},{"location":"fields/#fields_177","text":"Field Name Field Type Description add Array< string > Added capabilities drop Array< string > Removed capabilities","title":"Fields"},{"location":"fields/#fieldsv1","text":"FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format. Each key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f: ', where is the name of a field in a struct, or key in a map 'v: ', where is the exact json formatted value of a list item 'i: ', where is position of a item in a list 'k: ', where is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set. The exact format is defined in sigs.k8s.io/structured-merge-diff","title":"FieldsV1"},{"location":"fields/#preferredschedulingterm","text":"An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).","title":"PreferredSchedulingTerm"},{"location":"fields/#fields_178","text":"Field Name Field Type Description preference NodeSelectorTerm A node selector term, associated with the corresponding weight. 
weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.","title":"Fields"},{"location":"fields/#nodeselector","text":"A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms.","title":"NodeSelector"},{"location":"fields/#fields_179","text":"Field Name Field Type Description nodeSelectorTerms Array< NodeSelectorTerm > Required. A list of node selector terms. The terms are ORed.","title":"Fields"},{"location":"fields/#weightedpodaffinityterm","text":"The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)","title":"WeightedPodAffinityTerm"},{"location":"fields/#fields_180","text":"Field Name Field Type Description podAffinityTerm PodAffinityTerm Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100.","title":"Fields"},{"location":"fields/#podaffinityterm","text":"Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running","title":"PodAffinityTerm"},{"location":"fields/#fields_181","text":"Field Name Field Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means \"this pod's namespace\". An empty selector ({}) matches all namespaces. This field is beta-level and is only honored when PodAffinityNamespaceSelector feature is enabled. namespaces Array< string > namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means \"this pod's namespace\" topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.","title":"Fields"},{"location":"fields/#typedlocalobjectreference","text":"TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace.","title":"TypedLocalObjectReference"},{"location":"fields/#fields_182","text":"Field Name Field Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. 
kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced","title":"Fields"},{"location":"fields/#persistentvolumeclaimcondition","text":"PersistentVolumeClaimCondition contains details about the state of a PVC","title":"PersistentVolumeClaimCondition"},{"location":"fields/#fields_183","text":"Field Name Field Type Description lastProbeTime Time Last time we probed the condition. lastTransitionTime Time Last time the condition transitioned from one status to another. message string Human-readable message indicating details about last transition. reason string Unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports \"ResizeStarted\" that means the underlying persistent volume is being resized. status string No description available type string Possible enum values: - \"FileSystemResizePending\" - controller resize is finished and a file system resize is pending on node - \"Resizing\" - a user trigger resize of pvc has been started","title":"Fields"},{"location":"fields/#keytopath","text":"Maps a string key to a path within a volume.","title":"KeyToPath"},{"location":"fields/#fields_184","text":"Field Name Field Type Description key string The key to project. mode integer Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string The relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'.","title":"Fields"},{"location":"fields/#downwardapivolumefile","text":"DownwardAPIVolumeFile represents information to create the file containing the pod field","title":"DownwardAPIVolumeFile"},{"location":"fields/#fields_185","text":"Field Name Field Type Description fieldRef ObjectFieldSelector Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef ResourceFieldSelector Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.","title":"Fields"},{"location":"fields/#persistentvolumeclaimtemplate","text":"PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource.","title":"PersistentVolumeClaimTemplate"},{"location":"fields/#fields_186","text":"Field Name Field Type Description metadata ObjectMeta May contain labels and annotations that will be copied into the PVC when creating it. 
No other fields are allowed and will be rejected during validation. spec PersistentVolumeClaimSpec The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.","title":"Fields"},{"location":"fields/#volumeprojection","text":"Projection that may be projected along with other supported volume types","title":"VolumeProjection"},{"location":"fields/#fields_187","text":"Field Name Field Type Description configMap ConfigMapProjection information about the configMap data to project downwardAPI DownwardAPIProjection information about the downwardAPI data to project secret SecretProjection information about the secret data to project serviceAccountToken ServiceAccountTokenProjection information about the serviceAccountToken data to project","title":"Fields"},{"location":"fields/#objectfieldselector","text":"ObjectFieldSelector selects an APIVersioned field of an object.","title":"ObjectFieldSelector"},{"location":"fields/#fields_188","text":"Field Name Field Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to \"v1\". fieldPath string Path of the field to select in the specified API version.","title":"Fields"},{"location":"fields/#resourcefieldselector","text":"ResourceFieldSelector represents container resources (cpu, memory) and their output format","title":"ResourceFieldSelector"},{"location":"fields/#fields_189","text":"Field Name Field Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to \"1\" resource string Required: resource to select","title":"Fields"},{"location":"fields/#httpheader_1","text":"HTTPHeader describes a custom header to be used in HTTP probes","title":"HTTPHeader"},{"location":"fields/#fields_190","text":"Field Name Field Type Description name string The header field name value string The header field value","title":"Fields"},{"location":"fields/#nodeselectorterm","text":"A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.","title":"NodeSelectorTerm"},{"location":"fields/#fields_191","text":"Field Name Field Type Description matchExpressions Array< NodeSelectorRequirement > A list of node selector requirements by node's labels. matchFields Array< NodeSelectorRequirement > A list of node selector requirements by node's fields.","title":"Fields"},{"location":"fields/#configmapprojection","text":"Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml)","title":"ConfigMapProjection"},{"location":"fields/#fields_192","text":"Field Name Field Type Description items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. 
If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its keys must be defined","title":"Fields"},{"location":"fields/#downwardapiprojection","text":"Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode.","title":"DownwardAPIProjection"},{"location":"fields/#fields_193","text":"Field Name Field Type Description items Array< DownwardAPIVolumeFile > Items is a list of DownwardAPIVolume file","title":"Fields"},{"location":"fields/#secretprojection","text":"Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml)","title":"SecretProjection"},{"location":"fields/#fields_194","text":"Field Name Field Type Description items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined","title":"Fields"},{"location":"fields/#serviceaccounttokenprojection","text":"ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise).","title":"ServiceAccountTokenProjection"},{"location":"fields/#fields_195","text":"Field Name Field Type Description audience string Audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer ExpirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. 
path string Path is the path relative to the mount point of the file to project the token into.","title":"Fields"},{"location":"fields/#nodeselectorrequirement","text":"A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.","title":"NodeSelectorRequirement"},{"location":"fields/#fields_196","text":"Field Name Field Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - \"DoesNotExist\" - \"Exists\" - \"Gt\" - \"In\" - \"Lt\" - \"NotIn\" values Array< string > An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.","title":"Fields"},{"location":"high-availability/","text":"High-Availability (HA) \u00b6 Workflow Controller \u00b6 Before v3.0, only one controller could run at once. (If it crashed, Kubernetes would start another pod.) v3.0 For many users, a short loss of workflow service may be acceptable - the new controller will just continue running workflows if it restarts. However, with high service guarantees, new pods may take too long to start running workflows. You should run two replicas, and one of which will be kept on hot-standby. A voluntary pod disruption can cause both replicas to be replaced at the same time. You should use a Pod Disruption Budget to prevent this and Pod Priority to recover faster from an involuntary pod disruption: Pod Disruption Budget Pod Priority Argo Server \u00b6 v2.6 Run a minimum of two replicas, typically three, should be run, otherwise it may be possible that API and webhook requests are dropped. Tip Consider using multi AZ-deployment using pod anti-affinity .","title":"High-Availability (HA)"},{"location":"high-availability/#high-availability-ha","text":"","title":"High-Availability (HA)"},{"location":"high-availability/#workflow-controller","text":"Before v3.0, only one controller could run at once. (If it crashed, Kubernetes would start another pod.) v3.0 For many users, a short loss of workflow service may be acceptable - the new controller will just continue running workflows if it restarts. However, with high service guarantees, new pods may take too long to start running workflows. You should run two replicas, and one of which will be kept on hot-standby. A voluntary pod disruption can cause both replicas to be replaced at the same time. You should use a Pod Disruption Budget to prevent this and Pod Priority to recover faster from an involuntary pod disruption: Pod Disruption Budget Pod Priority","title":"Workflow Controller"},{"location":"high-availability/#argo-server","text":"v2.6 Run a minimum of two replicas, typically three, should be run, otherwise it may be possible that API and webhook requests are dropped. Tip Consider using multi AZ-deployment using pod anti-affinity .","title":"Argo Server"},{"location":"http-template/","text":"HTTP Template \u00b6 v3.2 and after HTTP Template is a type of template which can execute HTTP Requests. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : http-template- spec : entrypoint : main templates : - name : main steps : - - name : get-google-homepage template : http arguments : parameters : [{ name : url , value : \"https://www.google.com\" }] - name : http inputs : parameters : - name : url http : timeoutSeconds : 20 # Default 30 url : \"{{inputs.parameters.url}}\" method : \"GET\" # Default GET headers : - name : \"x-header-name\" value : \"test-value\" # Template will succeed if evaluated to true, otherwise will fail # Available variables: # request.body: string, the request body # request.headers: map[string][]string, the request headers # response.url: string, the request url # response.method: string, the request method # response.statusCode: int, the response status code # response.body: string, the response body # response.headers: map[string][]string, the response headers successCondition : \"response.body contains \\\"google\\\"\" # available since v3.3 body : \"test body\" # Change request body Argo Agent \u00b6 HTTP Templates use the Argo Agent, which executes the requests independently of the controller. The Agent and the Workflow Controller communicate through the WorkflowTaskSet CRD, which is created for each running Workflow that requires the use of the Agent . In order to use the Argo Agent, you will need to ensure that you have added the appropriate workflow RBAC to add an agent role with to Argo Workflows. An example agent role can be found in the quick-start manifests .","title":"HTTP Template"},{"location":"http-template/#http-template","text":"v3.2 and after HTTP Template is a type of template which can execute HTTP Requests. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : http-template- spec : entrypoint : main templates : - name : main steps : - - name : get-google-homepage template : http arguments : parameters : [{ name : url , value : \"https://www.google.com\" }] - name : http inputs : parameters : - name : url http : timeoutSeconds : 20 # Default 30 url : \"{{inputs.parameters.url}}\" method : \"GET\" # Default GET headers : - name : \"x-header-name\" value : \"test-value\" # Template will succeed if evaluated to true, otherwise will fail # Available variables: # request.body: string, the request body # request.headers: map[string][]string, the request headers # response.url: string, the request url # response.method: string, the request method # response.statusCode: int, the response status code # response.body: string, the response body # response.headers: map[string][]string, the response headers successCondition : \"response.body contains \\\"google\\\"\" # available since v3.3 body : \"test body\" # Change request body","title":"HTTP Template"},{"location":"http-template/#argo-agent","text":"HTTP Templates use the Argo Agent, which executes the requests independently of the controller. The Agent and the Workflow Controller communicate through the WorkflowTaskSet CRD, which is created for each running Workflow that requires the use of the Agent . In order to use the Argo Agent, you will need to ensure that you have added the appropriate workflow RBAC to add an agent role with to Argo Workflows. An example agent role can be found in the quick-start manifests .","title":"Argo Agent"},{"location":"ide-setup/","text":"IDE Set-Up \u00b6 Validating Argo YAML against the JSON Schema \u00b6 Argo provides a JSON Schema that enables validation of YAML resources in your IDE. 
JetBrains IDEs (Community & Ultimate Editions) \u00b6 YAML validation is supported natively in IDEA. Configure your IDE to reference the Argo schema and map it to your Argo YAML files: The schema is located here . Specify a file glob pattern that locates your Argo files. The example glob here is for the Argo Github project! Note that you may need to restart IDEA to pick up the changes. That's it. Open an Argo YAML file and you should see smarter behavior, including type errors and context-sensitive auto-complete. JetBrains IDEs (Community & Ultimate Editions) + Kubernetes Plugin \u00b6 If you have the JetBrains Kubernetes Plugin installed in your IDE, the validation can be configured in the Kubernetes plugin settings instead of using the internal JSON schema file validator. Unlike the previous JSON schema validation method, the plugin detects the necessary validation based on Kubernetes resource definition keys and does not require a file glob pattern. Like the previously described method: The schema is located here . Note that you may need to restart IDEA to pick up the changes. VSCode \u00b6 The Red Hat YAML plugin will provide error highlighting and auto-completion for Argo resources. Install the Red Hat YAML plugin in VSCode and open extension settings: Open the YAML schema settings: Add the Argo schema setting yaml.schemas : The schema is located here . Specify a file glob pattern that locates your Argo files. The example glob here is for the Argo Github project! Note that other defined schema with overlapping glob patterns may cause errors. That's it. Open an Argo YAML file and you should see smarter behavior, including type errors and context-sensitive auto-complete.","title":"IDE Set-Up"},{"location":"ide-setup/#ide-set-up","text":"","title":"IDE Set-Up"},{"location":"ide-setup/#validating-argo-yaml-against-the-json-schema","text":"Argo provides a JSON Schema that enables validation of YAML resources in your IDE.","title":"Validating Argo YAML against the JSON Schema"},{"location":"ide-setup/#jetbrains-ides-community-ultimate-editions","text":"YAML validation is supported natively in IDEA. Configure your IDE to reference the Argo schema and map it to your Argo YAML files: The schema is located here . Specify a file glob pattern that locates your Argo files. The example glob here is for the Argo Github project! Note that you may need to restart IDEA to pick up the changes. That's it. Open an Argo YAML file and you should see smarter behavior, including type errors and context-sensitive auto-complete.","title":"JetBrains IDEs (Community & Ultimate Editions)"},{"location":"ide-setup/#jetbrains-ides-community-ultimate-editions-kubernetes-plugin","text":"If you have the JetBrains Kubernetes Plugin installed in your IDE, the validation can be configured in the Kubernetes plugin settings instead of using the internal JSON schema file validator. Unlike the previous JSON schema validation method, the plugin detects the necessary validation based on Kubernetes resource definition keys and does not require a file glob pattern. Like the previously described method: The schema is located here . Note that you may need to restart IDEA to pick up the changes.","title":"JetBrains IDEs (Community & Ultimate Editions) + Kubernetes Plugin"},{"location":"ide-setup/#vscode","text":"The Red Hat YAML plugin will provide error highlighting and auto-completion for Argo resources. 
Install the Red Hat YAML plugin in VSCode and open extension settings: Open the YAML schema settings: Add the Argo schema setting yaml.schemas : The schema is located here . Specify a file glob pattern that locates your Argo files. The example glob here is for the Argo Github project! Note that other defined schema with overlapping glob patterns may cause errors. That's it. Open an Argo YAML file and you should see smarter behavior, including type errors and context-sensitive auto-complete.","title":"VSCode"},{"location":"inline-templates/","text":"Inline Templates \u00b6 v3.2 and after You can inline other templates within DAG and steps. Examples: DAG Steps Warning You can only inline once. Inline a DAG within a DAG will not work.","title":"Inline Templates"},{"location":"inline-templates/#inline-templates","text":"v3.2 and after You can inline other templates within DAG and steps. Examples: DAG Steps Warning You can only inline once. Inline a DAG within a DAG will not work.","title":"Inline Templates"},{"location":"installation/","text":"Installation \u00b6 Non-production installation \u00b6 If you just want to try out Argo Workflows in a non-production environment (including on desktop via minikube/kind/k3d etc) follow the quick-start guide . Production installation \u00b6 Installation Methods \u00b6 Official release manifests \u00b6 To install Argo Workflows, navigate to the releases page and find the release you wish to use (the latest full release is preferred). Scroll down to the Controller and Server section and execute the kubectl commands. You can use Kustomize to patch your preferred configurations on top of the base manifest. \u26a0\ufe0f If you are using GitOps, never use Kustomize remote base: this is dangerous. Instead, copy the manifests into your Git repo. \u26a0\ufe0f latest is tip, not stable. Never run it in production. Argo Workflows Helm Chart \u00b6 You can install Argo Workflows using the community maintained Helm charts . Installation options \u00b6 Determine your base installation option. A cluster install will watch and execute workflows in all namespaces. This is the default installation option when installing using the official release manifests. A namespace install only executes workflows in the namespace it is installed in (typically argo ). Look for namespace-install.yaml in the release assets . A managed namespace install : only executes workflows in a separate namespace from the one it is installed in. See Managed Namespace for more details. Additional installation considerations \u00b6 Review the following: Security . Scaling and running at massive scale . High-availability Disaster recovery","title":"Installation"},{"location":"installation/#installation","text":"","title":"Installation"},{"location":"installation/#non-production-installation","text":"If you just want to try out Argo Workflows in a non-production environment (including on desktop via minikube/kind/k3d etc) follow the quick-start guide .","title":"Non-production installation"},{"location":"installation/#production-installation","text":"","title":"Production installation"},{"location":"installation/#installation-methods","text":"","title":"Installation Methods"},{"location":"installation/#official-release-manifests","text":"To install Argo Workflows, navigate to the releases page and find the release you wish to use (the latest full release is preferred). Scroll down to the Controller and Server section and execute the kubectl commands. 
You can use Kustomize to patch your preferred configurations on top of the base manifest. \u26a0\ufe0f If you are using GitOps, never use Kustomize remote base: this is dangerous. Instead, copy the manifests into your Git repo. \u26a0\ufe0f latest is tip, not stable. Never run it in production.","title":"Official release manifests"},{"location":"installation/#argo-workflows-helm-chart","text":"You can install Argo Workflows using the community maintained Helm charts .","title":"Argo Workflows Helm Chart"},{"location":"installation/#installation-options","text":"Determine your base installation option. A cluster install will watch and execute workflows in all namespaces. This is the default installation option when installing using the official release manifests. A namespace install only executes workflows in the namespace it is installed in (typically argo ). Look for namespace-install.yaml in the release assets . A managed namespace install : only executes workflows in a separate namespace from the one it is installed in. See Managed Namespace for more details.","title":"Installation options"},{"location":"installation/#additional-installation-considerations","text":"Review the following: Security . Scaling and running at massive scale . High-availability Disaster recovery","title":"Additional installation considerations"},{"location":"intermediate-inputs/","text":"Intermediate Parameters \u00b6 v3.4 and after Traditionally, Argo workflows has supported input parameters from UI only when the workflow starts, and after that, it's pretty much on autopilot. But, there are a lot of use cases where human interaction is required. This interaction is in the form of providing input text in the middle of the workflow, choosing from a dropdown of the options which a workflow step itself is intelligently generating. A similar feature which you can see in jenkins is pipeline-input-step Example use cases include: A human approval before doing something in production environment. Programmatic generation of a list of inputs from which the user chooses. Choosing from a list of available databases which the workflow itself is generating. This feature is achieved via suspend template . The workflow will pause at a Suspend node, and user will be able to update parameters using fields type text or dropdown. Intermediate Parameters Approval Example \u00b6 The below example shows static enum values approval step. The user will be able to choose between [YES, NO] which will be used in subsequent steps. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : intermediate-parameters-cicd- spec : entrypoint : cicd-pipeline templates : - name : cicd-pipeline steps : - - name : deploy-pre-prod template : deploy - - name : approval template : approval - - name : deploy-prod template : deploy when : '{{steps.approval.outputs.parameters.approve}} == YES' - name : approval suspend : {} inputs : parameters : - name : approve default : 'NO' enum : - 'YES' - 'NO' description : >- Choose YES to continue workflow and deploy to production outputs : parameters : - name : approve valueFrom : supplied : {} - name : deploy container : image : 'argoproj/argosay:v2' command : - /argosay args : - echo - deploying Intermediate Parameters DB Schema Update Example \u00b6 The below example shows programmatic generation of enum values. The generate-db-list template generates an output called db_list . This output is of type json . 
Since this json has a key called enum , with an array of options, the UI will parse this and display it as a dropdown. The output can be any string also, in which case the UI will display it as a text field. Which the user can later edit. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : intermediate-parameters-db- spec : entrypoint : db-schema-update templates : - name : db-schema-update steps : - - name : generate-db-list template : generate-db-list - - name : choose-db template : choose-db arguments : parameters : - name : db_name value : '{{steps.generate-db-list.outputs.parameters.db_list}}' - - name : update-schema template : update-schema arguments : parameters : - name : db_name value : '{{steps.choose-db.outputs.parameters.db_name}}' - name : generate-db-list outputs : parameters : - name : db_list valueFrom : path : /tmp/db_list.txt container : name : main image : 'argoproj/argosay:v2' command : - sh - '-c' args : - >- echo \"{\\\"enum\\\": [\\\"db1\\\", \\\"db2\\\", \\\"db3\\\"]}\" | tee /tmp/db_list.txt - name : choose-db inputs : parameters : - name : db_name description : >- Choose DB to update a schema outputs : parameters : - name : db_name valueFrom : supplied : {} suspend : {} - name : update-schema inputs : parameters : - name : db_name container : name : main image : 'argoproj/argosay:v2' command : - sh - '-c' args : - echo Updating DB {{inputs.parameters.db_name}} Some Important Details \u00b6 The suspended node should have the SAME parameters defined in inputs.parameters and outputs.parameters . All the output parameters in the suspended node should have valueFrom.supplied: {} The selected values will be available at .outputs.parameters.","title":"Intermediate Parameters"},{"location":"intermediate-inputs/#intermediate-parameters","text":"v3.4 and after Traditionally, Argo workflows has supported input parameters from UI only when the workflow starts, and after that, it's pretty much on autopilot. But, there are a lot of use cases where human interaction is required. This interaction is in the form of providing input text in the middle of the workflow, choosing from a dropdown of the options which a workflow step itself is intelligently generating. A similar feature which you can see in jenkins is pipeline-input-step Example use cases include: A human approval before doing something in production environment. Programmatic generation of a list of inputs from which the user chooses. Choosing from a list of available databases which the workflow itself is generating. This feature is achieved via suspend template . The workflow will pause at a Suspend node, and user will be able to update parameters using fields type text or dropdown.","title":"Intermediate Parameters"},{"location":"intermediate-inputs/#intermediate-parameters-approval-example","text":"The below example shows static enum values approval step. The user will be able to choose between [YES, NO] which will be used in subsequent steps. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : intermediate-parameters-cicd- spec : entrypoint : cicd-pipeline templates : - name : cicd-pipeline steps : - - name : deploy-pre-prod template : deploy - - name : approval template : approval - - name : deploy-prod template : deploy when : '{{steps.approval.outputs.parameters.approve}} == YES' - name : approval suspend : {} inputs : parameters : - name : approve default : 'NO' enum : - 'YES' - 'NO' description : >- Choose YES to continue workflow and deploy to production outputs : parameters : - name : approve valueFrom : supplied : {} - name : deploy container : image : 'argoproj/argosay:v2' command : - /argosay args : - echo - deploying","title":"Intermediate Parameters Approval Example"},{"location":"intermediate-inputs/#intermediate-parameters-db-schema-update-example","text":"The below example shows programmatic generation of enum values. The generate-db-list template generates an output called db_list . This output is of type json . Since this json has a key called enum , with an array of options, the UI will parse this and display it as a dropdown. The output can be any string also, in which case the UI will display it as a text field. Which the user can later edit. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : intermediate-parameters-db- spec : entrypoint : db-schema-update templates : - name : db-schema-update steps : - - name : generate-db-list template : generate-db-list - - name : choose-db template : choose-db arguments : parameters : - name : db_name value : '{{steps.generate-db-list.outputs.parameters.db_list}}' - - name : update-schema template : update-schema arguments : parameters : - name : db_name value : '{{steps.choose-db.outputs.parameters.db_name}}' - name : generate-db-list outputs : parameters : - name : db_list valueFrom : path : /tmp/db_list.txt container : name : main image : 'argoproj/argosay:v2' command : - sh - '-c' args : - >- echo \"{\\\"enum\\\": [\\\"db1\\\", \\\"db2\\\", \\\"db3\\\"]}\" | tee /tmp/db_list.txt - name : choose-db inputs : parameters : - name : db_name description : >- Choose DB to update a schema outputs : parameters : - name : db_name valueFrom : supplied : {} suspend : {} - name : update-schema inputs : parameters : - name : db_name container : name : main image : 'argoproj/argosay:v2' command : - sh - '-c' args : - echo Updating DB {{inputs.parameters.db_name}}","title":"Intermediate Parameters DB Schema Update Example"},{"location":"intermediate-inputs/#some-important-details","text":"The suspended node should have the SAME parameters defined in inputs.parameters and outputs.parameters . All the output parameters in the suspended node should have valueFrom.supplied: {} The selected values will be available at .outputs.parameters.","title":"Some Important Details"},{"location":"key-only-artifacts/","text":"Key-Only Artifacts \u00b6 v3.0 and after A key-only artifact is an input or output artifact where you only specify the key, omitting the bucket, secrets etc. When these are omitted, the bucket/secrets from the configured artifact repository is used. This allows you to move the configuration of the artifact repository out of the workflow specification. This is closely related to artifact repository ref . You'll want to use them together for maximum benefit. This should probably be your default if you're using v3.0: Reduces the size of workflows (improved performance). 
User owned artifact repository set-up configuration (simplified management). Decouples the artifact location configuration from the workflow. Allowing you to re-configure the artifact repository without changing your workflows or templates. Example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : key-only-artifacts- spec : entrypoint : main templates : - name : main dag : tasks : - name : generate template : generate - name : consume template : consume dependencies : - generate - name : generate container : image : argoproj/argosay:v2 args : [ echo , hello , /mnt/file ] outputs : artifacts : - name : file path : /mnt/file s3 : key : my-file - name : consume container : image : argoproj/argosay:v2 args : [ cat , /tmp/file ] inputs : artifacts : - name : file path : /tmp/file s3 : key : my-file Warning The location data is not longer stored in /status/nodes . Any tooling that relies on this will need to be updated.","title":"Key-Only Artifacts"},{"location":"key-only-artifacts/#key-only-artifacts","text":"v3.0 and after A key-only artifact is an input or output artifact where you only specify the key, omitting the bucket, secrets etc. When these are omitted, the bucket/secrets from the configured artifact repository is used. This allows you to move the configuration of the artifact repository out of the workflow specification. This is closely related to artifact repository ref . You'll want to use them together for maximum benefit. This should probably be your default if you're using v3.0: Reduces the size of workflows (improved performance). User owned artifact repository set-up configuration (simplified management). Decouples the artifact location configuration from the workflow. Allowing you to re-configure the artifact repository without changing your workflows or templates. Example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : key-only-artifacts- spec : entrypoint : main templates : - name : main dag : tasks : - name : generate template : generate - name : consume template : consume dependencies : - generate - name : generate container : image : argoproj/argosay:v2 args : [ echo , hello , /mnt/file ] outputs : artifacts : - name : file path : /mnt/file s3 : key : my-file - name : consume container : image : argoproj/argosay:v2 args : [ cat , /tmp/file ] inputs : artifacts : - name : file path : /tmp/file s3 : key : my-file Warning The location data is not longer stored in /status/nodes . Any tooling that relies on this will need to be updated.","title":"Key-Only Artifacts"},{"location":"kubectl/","text":"kubectl \u00b6 You can also create Workflows directly with kubectl . However, the Argo CLI offers extra features that kubectl does not, such as YAML validation, workflow visualization, parameter passing, retries and resubmits, suspend and resume, and more. kubectl create -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-world.yaml kubectl get wf -n argo kubectl get wf hello-world-xxx -n argo kubectl get po -n argo --selector = workflows.argoproj.io/workflow = hello-world-xxx kubectl logs hello-world-yyy -c main -n argo","title":"kubectl"},{"location":"kubectl/#kubectl","text":"You can also create Workflows directly with kubectl . However, the Argo CLI offers extra features that kubectl does not, such as YAML validation, workflow visualization, parameter passing, retries and resubmits, suspend and resume, and more. 
kubectl create -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-world.yaml kubectl get wf -n argo kubectl get wf hello-world-xxx -n argo kubectl get po -n argo --selector = workflows.argoproj.io/workflow = hello-world-xxx kubectl logs hello-world-yyy -c main -n argo","title":"kubectl"},{"location":"lifecyclehook/","text":"Lifecycle-Hook \u00b6 v3.3 and after Introduction \u00b6 A LifecycleHook triggers an action based on a conditional expression or on completion of a step or template. It is configured either at the workflow-level or template-level, for instance as a function of the workflow.status or steps.status , respectively. A LifecycleHook executes during execution time and executes once. It will execute in parallel to its step or template once the expression is satisfied. In other words, a LifecycleHook functions like an exit handler with a conditional expression. You must not name a LifecycleHook exit or it becomes an exit handler; otherwise the hook name has no relevance. Workflow-level LifecycleHook : Executes the template when a configured expression is met during the workflow. Workflow-level Lifecycle-Hook example Template-level Lifecycle-Hook : Executes the template when a configured expression is met during the step in which it is defined. Template-level Lifecycle-Hook example Supported conditions \u00b6 Exit handler variables : workflow.status and workflow.failures template templateRef arguments Unsupported conditions \u00b6 outputs are not usable since LifecycleHook executes during execution time and outputs are not produced until the step is completed. You can use outputs from previous steps, just not the one you're hooking into. If you'd like to use outputs create an exit handler instead - all the status variable are available there so you can still conditionally decide what to do. Notification use case \u00b6 A LifecycleHook can be used to configure a notification depending on a workflow status change or template status change, like the example below: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : lifecycle-hook- spec : entrypoint : main hooks : exit : template : http running : expression : workflow.status == \"Running\" template : http templates : - name : main steps : - - name : step1 template : heads - name : heads container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads\\\"\" ] - name : http http : url : http://dummy.restapiexample.com/api/v1/employees Put differently, an exit handler is like a workflow-level LifecycleHook with an expression of workflow.status == \"Succeeded\" or workflow.status == \"Failed\" or workflow.status == \"Error\" .","title":"Lifecycle-Hook"},{"location":"lifecyclehook/#lifecycle-hook","text":"v3.3 and after","title":"Lifecycle-Hook"},{"location":"lifecyclehook/#introduction","text":"A LifecycleHook triggers an action based on a conditional expression or on completion of a step or template. It is configured either at the workflow-level or template-level, for instance as a function of the workflow.status or steps.status , respectively. A LifecycleHook executes during execution time and executes once. It will execute in parallel to its step or template once the expression is satisfied. In other words, a LifecycleHook functions like an exit handler with a conditional expression. You must not name a LifecycleHook exit or it becomes an exit handler; otherwise the hook name has no relevance. 
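A minimal sketch of a template-level hook attached to a single step, complementing the workflow-level example further below; the hook names, expressions, and notification URL here are illustrative assumptions, not values taken from this page:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: lifecycle-hook-tmpl-level-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: step-1
            template: heads
            hooks:
              running:            # fires once, in parallel with step-1, when the expression is first satisfied
                expression: steps["step-1"].status == "Running"
                template: notify
              succeeded:
                expression: steps["step-1"].status == "Succeeded"
                template: notify
    - name: heads
      container:
        image: alpine:3.6
        command: [sh, -c]
        args: ["echo \"it was heads\""]
    - name: notify
      http:
        url: https://example.com/notify   # assumed notification endpoint
```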
Workflow-level LifecycleHook : Executes the template when a configured expression is met during the workflow. Workflow-level Lifecycle-Hook example Template-level Lifecycle-Hook : Executes the template when a configured expression is met during the step in which it is defined. Template-level Lifecycle-Hook example","title":"Introduction"},{"location":"lifecyclehook/#supported-conditions","text":"Exit handler variables : workflow.status and workflow.failures template templateRef arguments","title":"Supported conditions"},{"location":"lifecyclehook/#unsupported-conditions","text":"outputs are not usable since LifecycleHook executes during execution time and outputs are not produced until the step is completed. You can use outputs from previous steps, just not the one you're hooking into. If you'd like to use outputs create an exit handler instead - all the status variable are available there so you can still conditionally decide what to do.","title":"Unsupported conditions"},{"location":"lifecyclehook/#notification-use-case","text":"A LifecycleHook can be used to configure a notification depending on a workflow status change or template status change, like the example below: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : lifecycle-hook- spec : entrypoint : main hooks : exit : template : http running : expression : workflow.status == \"Running\" template : http templates : - name : main steps : - - name : step1 template : heads - name : heads container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads\\\"\" ] - name : http http : url : http://dummy.restapiexample.com/api/v1/employees Put differently, an exit handler is like a workflow-level LifecycleHook with an expression of workflow.status == \"Succeeded\" or workflow.status == \"Failed\" or workflow.status == \"Error\" .","title":"Notification use case"},{"location":"links/","text":"Links \u00b6 v2.7 and after You can configure Argo Server to show custom links: A \"Get Help\" button in the bottom right of the window linking to you organization help pages or chat room. Deep-links to your facilities (e.g. logging facility) in the UI for both the workflow and each workflow pod. Adds a button to the top of workflow view to navigate to customized views. Links can contain placeholder variables. Placeholder variables are indicated by the dollar sign and curly braces: ${variable} . These are the commonly used variables: ${metadata.namespace} : Kubernetes namespace of the current workflow / pod / event source / sensor ${metadata.name} : Name of the current workflow / pod / event source / sensor ${status.startedAt} : Start time-stamp of the workflow / pod, in the format of 2021-01-01T10:35:56Z ${status.finishedAt} : End time-stamp of the workflow / pod, in the format of 2021-01-01T10:35:56Z . If the workflow/pod is still running, this variable will be null See workflow-controller-configmap.yaml for a complete example v3.1 and after Epoch time-stamps are available now. These are useful if we want to add links to logging facilities like Grafana or DataDog , as they support Unix epoch time-stamp formats as URL parameters: ${status.startedAtEpoch} : Start time-stamp of the workflow/pod, in the Unix epoch time format in milliseconds , e.g. 1609497000000 . ${status.finishedAtEpoch} : End time-stamp of the workflow/pod, in the Unix epoch time format in milliseconds , e.g. 1609497000000 . If the workflow/pod is still running, this variable will represent the current time. 
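As a hedged illustration of what such links look like in the workflow-controller-configmap, using the placeholder variables described above; the link names, Grafana-style URLs, and query parameters below are assumptions for the sake of example, not values from this page:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data:
  links: |
    # workflow-scoped link: shown on the workflow page, using epoch time-stamps as URL parameters
    - name: Workflow Logs
      scope: workflow
      url: https://grafana.example.com/logs?from=${status.startedAtEpoch}&to=${status.finishedAtEpoch}&var-workflow=${metadata.name}
    # pod-scoped link: shown for each workflow pod
    - name: Pod Logs
      scope: pod
      url: https://grafana.example.com/logs?namespace=${metadata.namespace}&pod=${metadata.name}
    # chat-scoped link: rendered as the "Get Help" button
    - name: Get Help
      scope: chat
      url: https://your-org.example.com/argo-help
```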
v3.1 and after In addition to the above variables, we can now access all workflow fields under ${workflow} . For example, one may find it useful to define a custom label in the workflow and access it by ${workflow.metadata.labels.custom_label_name} We can also access workflow fields in a pod link. For example, ${workflow.metadata.name} returns the name of the workflow instead of the name of the pod. If the field doesn't exist on the workflow then the value will be an empty string.","title":"Links"},{"location":"links/#links","text":"v2.7 and after You can configure Argo Server to show custom links: A \"Get Help\" button in the bottom right of the window linking to you organization help pages or chat room. Deep-links to your facilities (e.g. logging facility) in the UI for both the workflow and each workflow pod. Adds a button to the top of workflow view to navigate to customized views. Links can contain placeholder variables. Placeholder variables are indicated by the dollar sign and curly braces: ${variable} . These are the commonly used variables: ${metadata.namespace} : Kubernetes namespace of the current workflow / pod / event source / sensor ${metadata.name} : Name of the current workflow / pod / event source / sensor ${status.startedAt} : Start time-stamp of the workflow / pod, in the format of 2021-01-01T10:35:56Z ${status.finishedAt} : End time-stamp of the workflow / pod, in the format of 2021-01-01T10:35:56Z . If the workflow/pod is still running, this variable will be null See workflow-controller-configmap.yaml for a complete example v3.1 and after Epoch time-stamps are available now. These are useful if we want to add links to logging facilities like Grafana or DataDog , as they support Unix epoch time-stamp formats as URL parameters: ${status.startedAtEpoch} : Start time-stamp of the workflow/pod, in the Unix epoch time format in milliseconds , e.g. 1609497000000 . ${status.finishedAtEpoch} : End time-stamp of the workflow/pod, in the Unix epoch time format in milliseconds , e.g. 1609497000000 . If the workflow/pod is still running, this variable will represent the current time. v3.1 and after In addition to the above variables, we can now access all workflow fields under ${workflow} . For example, one may find it useful to define a custom label in the workflow and access it by ${workflow.metadata.labels.custom_label_name} We can also access workflow fields in a pod link. For example, ${workflow.metadata.name} returns the name of the workflow instead of the name of the pod. If the field doesn't exist on the workflow then the value will be an empty string.","title":"Links"},{"location":"managed-namespace/","text":"Managed Namespace \u00b6 v2.5 and after You can install Argo in either namespace scoped or cluster scoped configurations. The main difference is whether you install Roles or ClusterRoles, respectively. In namespace scoped configuration, you must run both the Workflow Controller and Argo Server using --namespaced . If you want to run workflows in a separate namespace, add --managed-namespace as well. (In cluster scoped configuration, don't include --namespaced or --managed-namespace .) For example: - args : - --configmap - workflow-controller-configmap - --executor-image - argoproj/workflow-controller:v2.5.1 - --namespaced - --managed-namespace - default Please note that both cluster scoped and namespace scoped configurations require \"admin\" roles to install because Argo's Custom Resource Definitions (CRDs) must be created (CRDs are cluster scoped objects). 
Example Use Case You can use a managed namespace install if you want some users or services to run Workflows without granting them privileges in the namespace where Argo Workflows is installed. For example, if you only run CI/CD Workflows that are maintained by the same team that manages the Argo Workflows installation, you may want a namespace install. But if all the Workflows are run by a separate data science team, you may want to give them a \"data-science-workflows\" namespace and use a managed namespace install of Argo Workflows in another namespace.","title":"Managed Namespace"},{"location":"managed-namespace/#managed-namespace","text":"v2.5 and after You can install Argo in either namespace scoped or cluster scoped configurations. The main difference is whether you install Roles or ClusterRoles, respectively. In namespace scoped configuration, you must run both the Workflow Controller and Argo Server using --namespaced . If you want to run workflows in a separate namespace, add --managed-namespace as well. (In cluster scoped configuration, don't include --namespaced or --managed-namespace .) For example: - args : - --configmap - workflow-controller-configmap - --executor-image - argoproj/workflow-controller:v2.5.1 - --namespaced - --managed-namespace - default Please note that both cluster scoped and namespace scoped configurations require \"admin\" roles to install because Argo's Custom Resource Definitions (CRDs) must be created (CRDs are cluster scoped objects). Example Use Case You can use a managed namespace install if you want some users or services to run Workflows without granting them privileges in the namespace where Argo Workflows is installed. For example, if you only run CI/CD Workflows that are maintained by the same team that manages the Argo Workflows installation, you may want a namespace install. But if all the Workflows are run by a separate data science team, you may want to give them a \"data-science-workflows\" namespace and use a managed namespace install of Argo Workflows in another namespace.","title":"Managed Namespace"},{"location":"manually-create-secrets/","text":"Service Account Secrets \u00b6 As of Kubernetes v1.24, secrets are no longer automatically created for service accounts. You must create a secret manually . You must also make the secret discoverable. You have two options: Option 1 - Discovery By Name \u00b6 Name your secret ${serviceAccountName}.service-account-token : apiVersion : v1 kind : Secret metadata : name : default.service-account-token annotations : kubernetes.io/service-account.name : default type : kubernetes.io/service-account-token This option is simpler than option 2, as you can create the secret and make it discoverable by name at the same time. Option 2 - Discovery By Annotation \u00b6 Annotate the service account with the secret name: apiVersion : v1 kind : ServiceAccount metadata : name : default annotations : workflows.argoproj.io/service-account-token.name : my-token This option is useful when the secret already exists, or the service account has a very long name.","title":"Service Account Secrets"},{"location":"manually-create-secrets/#service-account-secrets","text":"As of Kubernetes v1.24, secrets are no longer automatically created for service accounts. You must create a secret manually . You must also make the secret discoverable. 
You have two options:","title":"Service Account Secrets"},{"location":"manually-create-secrets/#option-1-discovery-by-name","text":"Name your secret ${serviceAccountName}.service-account-token : apiVersion : v1 kind : Secret metadata : name : default.service-account-token annotations : kubernetes.io/service-account.name : default type : kubernetes.io/service-account-token This option is simpler than option 2, as you can create the secret and make it discoverable by name at the same time.","title":"Option 1 - Discovery By Name"},{"location":"manually-create-secrets/#option-2-discovery-by-annotation","text":"Annotate the service account with the secret name: apiVersion : v1 kind : ServiceAccount metadata : name : default annotations : workflows.argoproj.io/service-account-token.name : my-token This option is useful when the secret already exists, or the service account has a very long name.","title":"Option 2 - Discovery By Annotation"},{"location":"memoization/","text":"Step Level Memoization \u00b6 v2.10 and after Introduction \u00b6 Workflows often have outputs that are expensive to compute. Memoization reduces cost and workflow execution time by recording the result of previously run steps: it stores the outputs of a template into a specified cache with a variable key. Prior to version 3.5 memoization only works for steps which have outputs, if you attempt to use it on steps which do not it should not work (there are some cases where it does, but they shouldn't). It was designed for 'pure' steps, where the purpose of running the step is to calculate some outputs based upon the step's inputs, and only the inputs. Pure steps should not interact with the outside world, but workflows won't enforce this on you. If you are using workflows prior to version 3.5 you should look at the work avoidance technique instead of memoization if your steps don't have outputs. In version 3.5 or later all steps can be memoized, whether or not they have outputs. Cache Method \u00b6 Currently, the cached data is stored in config-maps. This allows you to easily manipulate cache entries manually through kubectl and the Kubernetes API without having to go through Argo. All cache config-maps must have the label workflows.argoproj.io/configmap-type: Cache to be used as a cache. This prevents accidental access to other important config-maps in the system Using Memoization \u00b6 Memoization is set at the template level. You must specify a key , which can be static strings but more often depend on inputs. You must also specify a name for the config-map cache. Optionally you can set a maxAge in seconds or hours (e.g. 180s , 24h ) to define how long should it be considered valid. If an entry is older than the maxAge , it will be ignored. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : memoized-workflow- spec : entrypoint : whalesay templates : - name : whalesay memoize : key : \"{{inputs.parameters.message}}\" maxAge : \"10s\" cache : configMap : name : whalesay-cache Find a simple example for memoization here . Note In order to use memoization it is necessary to add the verbs create and update to the configmaps resource for the appropriate (cluster) roles. In the case of a cluster install the argo-cluster-role cluster role should be updated, whilst for a namespace install the argo-role role should be updated. 
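As a rough sketch, the relevant rule in a namespace install's argo-role might end up looking like the excerpt below. The read verbs shown alongside create and update are assumed to be present already; check the role that your install actually created rather than copying this verbatim:

```yaml
# Hypothetical excerpt of the argo-role Role after adding the verbs memoization needs.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argo-role
  namespace: argo
rules:
  - apiGroups: [""]          # config-maps live in the core API group
    resources: ["configmaps"]
    verbs: ["get", "watch", "list", "create", "update"]
```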
FAQ \u00b6 If you see errors like error creating cache entry: ConfigMap \\\"reuse-task\\\" is invalid: []: Too long: must have at most 1048576 characters , this is due to the 1MB limit placed on the size of ConfigMap . Here are a couple of ways that might help resolve this: Delete the existing ConfigMap cache or switch to use a different cache. Reduce the size of the output parameters for the nodes that are being memoized. Split your cache into different memoization keys and cache names so that each cache entry is small. My step isn't getting memoized, why not? If you are running workflows <3.5 ensure that you have specified at least one output on the step.","title":"Step Level Memoization"},{"location":"memoization/#step-level-memoization","text":"v2.10 and after","title":"Step Level Memoization"},{"location":"memoization/#introduction","text":"Workflows often have outputs that are expensive to compute. Memoization reduces cost and workflow execution time by recording the result of previously run steps: it stores the outputs of a template into a specified cache with a variable key. Prior to version 3.5 memoization only works for steps which have outputs, if you attempt to use it on steps which do not it should not work (there are some cases where it does, but they shouldn't). It was designed for 'pure' steps, where the purpose of running the step is to calculate some outputs based upon the step's inputs, and only the inputs. Pure steps should not interact with the outside world, but workflows won't enforce this on you. If you are using workflows prior to version 3.5 you should look at the work avoidance technique instead of memoization if your steps don't have outputs. In version 3.5 or later all steps can be memoized, whether or not they have outputs.","title":"Introduction"},{"location":"memoization/#cache-method","text":"Currently, the cached data is stored in config-maps. This allows you to easily manipulate cache entries manually through kubectl and the Kubernetes API without having to go through Argo. All cache config-maps must have the label workflows.argoproj.io/configmap-type: Cache to be used as a cache. This prevents accidental access to other important config-maps in the system","title":"Cache Method"},{"location":"memoization/#using-memoization","text":"Memoization is set at the template level. You must specify a key , which can be static strings but more often depend on inputs. You must also specify a name for the config-map cache. Optionally you can set a maxAge in seconds or hours (e.g. 180s , 24h ) to define how long should it be considered valid. If an entry is older than the maxAge , it will be ignored. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : memoized-workflow- spec : entrypoint : whalesay templates : - name : whalesay memoize : key : \"{{inputs.parameters.message}}\" maxAge : \"10s\" cache : configMap : name : whalesay-cache Find a simple example for memoization here . Note In order to use memoization it is necessary to add the verbs create and update to the configmaps resource for the appropriate (cluster) roles. 
In the case of a cluster install the argo-cluster-role cluster role should be updated, whilst for a namespace install the argo-role role should be updated.","title":"Using Memoization"},{"location":"memoization/#faq","text":"If you see errors like error creating cache entry: ConfigMap \\\"reuse-task\\\" is invalid: []: Too long: must have at most 1048576 characters , this is due to the 1MB limit placed on the size of ConfigMap . Here are a couple of ways that might help resolve this: Delete the existing ConfigMap cache or switch to use a different cache. Reduce the size of the output parameters for the nodes that are being memoized. Split your cache into different memoization keys and cache names so that each cache entry is small. My step isn't getting memoized, why not? If you are running workflows <3.5 ensure that you have specified at least one output on the step.","title":"FAQ"},{"location":"metrics/","text":"Prometheus Metrics \u00b6 v2.7 and after Introduction \u00b6 Argo emits a certain number of controller metrics that inform on the state of the controller at any given time. Furthermore, users can also define their own custom metrics to inform on the state of their Workflows. Custom Prometheus metrics can be defined to be emitted on a Workflow - and Template -level basis. These can be useful for many cases; some examples: Keeping track of the duration of a Workflow or Template over time, and setting an alert if it goes beyond a threshold Keeping track of the number of times a Workflow or Template fails over time Reporting an important internal metric, such as a model training score or an internal error rate Emitting custom metrics with Argo is easy, but it's important to understand what makes a good Prometheus metric and the best way to define metrics in Argo to avoid problems such as cardinality explosion . Metrics and metrics in Argo \u00b6 There are two kinds of metrics emitted by Argo: controller metrics and custom metrics . Controller metrics \u00b6 Metrics that inform on the state of the controller; i.e., they answer the question \"What is the state of the controller right now?\" Default controller metrics can be scraped from service workflow-controller-metrics at the endpoint :9090/metrics Custom metrics \u00b6 Metrics that inform on the state of a Workflow, or a series of Workflows. These custom metrics are defined by the user in the Workflow spec. Emitting custom metrics is the responsibility of the emitter owner. Since the user defines Workflows in Argo, the user is responsible for emitting metrics correctly. What is and isn't a Prometheus metric \u00b6 Prometheus metrics should be thought of as ephemeral data points of running processes; i.e., they are the answer to the question \"What is the state of my system right now ?\". Metrics should report things such as: a counter of the number of times a workflow or steps has failed, or a gauge of workflow duration, or an average of an internal metric such as a model training score or error rate. Metrics are then routinely scraped and stored and -- when they are correctly designed -- they can represent time series. Aggregating the examples above over time could answer useful questions such as: How has the error rate of this workflow or step changed over time? How has the duration of this workflow changed over time? Is the current workflow running for too long? Is our model improving over time? Prometheus metrics should not be thought of as a store of data. 
Since metrics should only report the state of the system at the current time, they should not be used to report historical data such as: the status of an individual instance of a workflow, or how long a particular instance of a step took to run. Metrics are also ephemeral, meaning there is no guarantee that they will be persisted for any amount of time. If you need a way to view and analyze historical data, consider the workflow archive or reporting to logs. Default Controller Metrics \u00b6 Metrics for the Four Golden Signals are: Latency: argo_workflows_queue_latency Traffic: argo_workflows_count and argo_workflows_queue_depth_count Errors: argo_workflows_count and argo_workflows_error_count Saturation: argo_workflows_workers_busy and argo_workflows_workflow_condition argo_pod_missing \u00b6 Pods were not seen. E.g. by being deleted by Kubernetes. You should only see this under high load. Note This metric's name starts with argo_ not argo_workflows_ . argo_workflows_count \u00b6 Number of workflow in each phase. The Running count does not mean that a workflows pods are running, just that the controller has scheduled them. A workflow can be stuck in Running with pending pods for a long time. argo_workflows_error_count \u00b6 A count of certain errors incurred by the controller. argo_workflows_k8s_request_total \u00b6 Number of API requests sent to the Kubernetes API. argo_workflows_operation_duration_seconds \u00b6 A histogram of durations of operations. An operation is a single workflow reconciliation loop within the workflow-controller. It's the time for the controller to process a single workflow after it has been read from the cluster and is a measure of the performance of the controller affected by the complexity of the workflow. argo_workflows_pods_count \u00b6 It is possible for a workflow to start, but no pods be running (e.g. cluster is too busy to run them). This metric sheds light on actual work being done. argo_workflows_queue_adds_count \u00b6 The number of additions to the queue of workflows or cron workflows. argo_workflows_queue_depth_count \u00b6 The depth of the queue of workflows or cron workflows to be processed by the controller. argo_workflows_queue_latency \u00b6 The time workflows or cron workflows spend in the queue waiting to be processed. argo_workflows_workers_busy \u00b6 The number of workers that are busy. argo_workflows_workflow_condition \u00b6 The number of workflow with different conditions. This will tell you the number of workflows with running pods. argo_workflows_workflows_processed_count \u00b6 A count of all Workflow updates processed by the controller. Metric types \u00b6 Please see the Prometheus docs on metric types . How metrics work in Argo \u00b6 In order to analyze the behavior of a workflow over time, we need to be able to link different instances (i.e. individual executions) of a workflow together into a \"series\" for the purposes of emitting metrics. We do so by linking them together with the same metric descriptor. In Prometheus, a metric descriptor is defined as a metric's name and its key-value labels. For example, for a metric tracking the duration of model execution over time, a metric descriptor could be: argo_workflows_model_exec_time{model_name=\"model_a\",phase=\"validation\"} This metric then represents the amount of time that \"Model A\" took to train in the phase \"Validation\". 
It is important to understand that the metric name and its labels form the descriptor: argo_workflows_model_exec_time{model_name=\"model_b\",phase=\"validation\"} is a different metric (and will track a different \"series\" altogether). Now, whenever we run our first workflow that validates \"Model A\" a metric with the amount of time it took it to do so will be created and emitted. For each subsequent time that this happens, no new metrics will be emitted and the same metric will be updated with the new value. Since, in effect, we are interested on the execution time of \"validation\" of \"Model A\" over time, we are no longer interested in the previous metric and can assume it has already been scraped. In summary, whenever you want to track a particular metric over time, you should use the same metric name and metric labels wherever it is emitted. This is how these metrics are \"linked\" as belonging to the same series. Grafana Dashboard for Argo Controller Metrics \u00b6 Please see the Argo Workflows metrics Grafana dashboard. Defining metrics \u00b6 Metrics are defined in-place on the Workflow/Step/Task where they are emitted from. Metrics are always processed after the Workflow/Step/Task completes, with the exception of real-time metrics . Metric definitions must include a name and a help doc string. They can also include any number of labels (when defining labels avoid cardinality explosion). Metrics with the same name must always use the same exact help string, having different metrics with the same name, but with a different help string will cause an error (this is a Prometheus requirement). All metrics can also be conditionally emitted by defining a when clause. This when clause works the same as elsewhere in a workflow. A metric must also have a type, it can be one of gauge , histogram , and counter ( see below ). Within the metric type a value must be specified. This value can be either a literal value of be an Argo variable . When defining a histogram , buckets must also be provided (see below). Argo variables can be included anywhere in the metric spec, such as in labels , name , help , when , etc. Metric names can only contain alphanumeric characters, _ , and : . Metric Spec \u00b6 In Argo you can define a metric on the Workflow level or on the Template level. Here is an example of a Workflow level Gauge metric that will report the Workflow duration time: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : model-training- spec : entrypoint : steps metrics : prometheus : - name : exec_duration_gauge # Metric name (will be prepended with \"argo_workflows_\") labels : # Labels are optional. Avoid cardinality explosion. - key : name value : model_a help : \"Duration gauge by name\" # A help doc describing your metric. This is required. gauge : # The metric type. Available are \"gauge\", \"histogram\", and \"counter\". value : \"{{workflow.duration}}\" # The value of your metric. It could be an Argo variable (see variables doc) or a literal value ... An example of a Template -level Counter metric that will increase a counter every time the step fails: ... templates : - name : flakey metrics : prometheus : - name : result_counter help : \"Count of step execution by result status\" labels : - key : name value : flakey when : \"{{status}} == Failed\" # Emit the metric conditionally. 
Works the same as normal \"when\" counter : value : \"1\" # This increments the counter by 1 container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] ... A similar example of such a Counter metric that will increase for every step status ... templates : - name : flakey metrics : prometheus : - name : result_counter help : \"Count of step execution by result status\" labels : - key : name value : flakey - key : status value : \"{{status}}\" # Argo variable in `labels` counter : value : \"1\" container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] ... Finally, an example of a Template -level Histogram metric that tracks an internal value: ... templates : - name : random-int metrics : prometheus : - name : random_int_step_histogram help : \"Value of the int emitted by random-int at step level\" when : \"{{status}} == Succeeded\" # Only emit metric when step succeeds histogram : buckets : # Bins must be defined for histogram metrics - 2.01 # and are part of the metric descriptor. - 4.01 # All metrics in this series MUST have the - 6.01 # same buckets. - 8.01 - 10.01 value : \"{{outputs.parameters.rand-int-value}}\" # References itself for its output (see variables doc) outputs : parameters : - name : rand-int-value globalName : rand-int-value valueFrom : path : /tmp/rand_int.txt container : image : alpine:latest command : [ sh , -c ] args : [ \"RAND_INT=$((1 + RANDOM % 10)); echo $RAND_INT; echo $RAND_INT > /tmp/rand_int.txt\" ] ... Real-Time Metrics \u00b6 Argo supports a limited number of real-time metrics. These metrics are emitted in real-time, beginning when the step execution starts and ending when it completes. Real-time metrics are only available on Gauge type metrics and with a limited number of variables . To define a real-time metric simply add realtime: true to a gauge metric with a valid real-time variable. For example: gauge : realtime : true value : \"{{duration}}\" Metrics endpoint \u00b6 By default, metrics are emitted by the workflow-controller on port 9090 on the /metrics path. By port-forwarding to the pod you can view the metrics in your browser at http://localhost:9090/metrics : kubectl -n argo port-forward deploy/workflow-controller 9090:9090 A metrics service is not installed as part of the default installation so you will need to add one if you wish to use a Prometheus Service Monitor: cat <:9090/metrics","title":"Controller metrics"},{"location":"metrics/#custom-metrics","text":"Metrics that inform on the state of a Workflow, or a series of Workflows. These custom metrics are defined by the user in the Workflow spec. Emitting custom metrics is the responsibility of the emitter owner. Since the user defines Workflows in Argo, the user is responsible for emitting metrics correctly.","title":"Custom metrics"},{"location":"metrics/#what-is-and-isnt-a-prometheus-metric","text":"Prometheus metrics should be thought of as ephemeral data points of running processes; i.e., they are the answer to the question \"What is the state of my system right now ?\". Metrics should report things such as: a counter of the number of times a workflow or steps has failed, or a gauge of workflow duration, or an average of an internal metric such as a model training score or error rate. 
Metrics are then routinely scraped and stored and -- when they are correctly designed -- they can represent time series. Aggregating the examples above over time could answer useful questions such as: How has the error rate of this workflow or step changed over time? How has the duration of this workflow changed over time? Is the current workflow running for too long? Is our model improving over time? Prometheus metrics should not be thought of as a store of data. Since metrics should only report the state of the system at the current time, they should not be used to report historical data such as: the status of an individual instance of a workflow, or how long a particular instance of a step took to run. Metrics are also ephemeral, meaning there is no guarantee that they will be persisted for any amount of time. If you need a way to view and analyze historical data, consider the workflow archive or reporting to logs.","title":"What is and isn't a Prometheus metric"},{"location":"metrics/#default-controller-metrics","text":"Metrics for the Four Golden Signals are: Latency: argo_workflows_queue_latency Traffic: argo_workflows_count and argo_workflows_queue_depth_count Errors: argo_workflows_count and argo_workflows_error_count Saturation: argo_workflows_workers_busy and argo_workflows_workflow_condition","title":"Default Controller Metrics"},{"location":"metrics/#argo_pod_missing","text":"Pods were not seen. E.g. by being deleted by Kubernetes. You should only see this under high load. Note This metric's name starts with argo_ not argo_workflows_ .","title":"argo_pod_missing"},{"location":"metrics/#argo_workflows_count","text":"Number of workflow in each phase. The Running count does not mean that a workflows pods are running, just that the controller has scheduled them. A workflow can be stuck in Running with pending pods for a long time.","title":"argo_workflows_count"},{"location":"metrics/#argo_workflows_error_count","text":"A count of certain errors incurred by the controller.","title":"argo_workflows_error_count"},{"location":"metrics/#argo_workflows_k8s_request_total","text":"Number of API requests sent to the Kubernetes API.","title":"argo_workflows_k8s_request_total"},{"location":"metrics/#argo_workflows_operation_duration_seconds","text":"A histogram of durations of operations. An operation is a single workflow reconciliation loop within the workflow-controller. It's the time for the controller to process a single workflow after it has been read from the cluster and is a measure of the performance of the controller affected by the complexity of the workflow.","title":"argo_workflows_operation_duration_seconds"},{"location":"metrics/#argo_workflows_pods_count","text":"It is possible for a workflow to start, but no pods be running (e.g. cluster is too busy to run them). 
This metric sheds light on actual work being done.","title":"argo_workflows_pods_count"},{"location":"metrics/#argo_workflows_queue_adds_count","text":"The number of additions to the queue of workflows or cron workflows.","title":"argo_workflows_queue_adds_count"},{"location":"metrics/#argo_workflows_queue_depth_count","text":"The depth of the queue of workflows or cron workflows to be processed by the controller.","title":"argo_workflows_queue_depth_count"},{"location":"metrics/#argo_workflows_queue_latency","text":"The time workflows or cron workflows spend in the queue waiting to be processed.","title":"argo_workflows_queue_latency"},{"location":"metrics/#argo_workflows_workers_busy","text":"The number of workers that are busy.","title":"argo_workflows_workers_busy"},{"location":"metrics/#argo_workflows_workflow_condition","text":"The number of workflow with different conditions. This will tell you the number of workflows with running pods.","title":"argo_workflows_workflow_condition"},{"location":"metrics/#argo_workflows_workflows_processed_count","text":"A count of all Workflow updates processed by the controller.","title":"argo_workflows_workflows_processed_count"},{"location":"metrics/#metric-types","text":"Please see the Prometheus docs on metric types .","title":"Metric types"},{"location":"metrics/#how-metrics-work-in-argo","text":"In order to analyze the behavior of a workflow over time, we need to be able to link different instances (i.e. individual executions) of a workflow together into a \"series\" for the purposes of emitting metrics. We do so by linking them together with the same metric descriptor. In Prometheus, a metric descriptor is defined as a metric's name and its key-value labels. For example, for a metric tracking the duration of model execution over time, a metric descriptor could be: argo_workflows_model_exec_time{model_name=\"model_a\",phase=\"validation\"} This metric then represents the amount of time that \"Model A\" took to train in the phase \"Validation\". It is important to understand that the metric name and its labels form the descriptor: argo_workflows_model_exec_time{model_name=\"model_b\",phase=\"validation\"} is a different metric (and will track a different \"series\" altogether). Now, whenever we run our first workflow that validates \"Model A\" a metric with the amount of time it took it to do so will be created and emitted. For each subsequent time that this happens, no new metrics will be emitted and the same metric will be updated with the new value. Since, in effect, we are interested on the execution time of \"validation\" of \"Model A\" over time, we are no longer interested in the previous metric and can assume it has already been scraped. In summary, whenever you want to track a particular metric over time, you should use the same metric name and metric labels wherever it is emitted. This is how these metrics are \"linked\" as belonging to the same series.","title":"How metrics work in Argo"},{"location":"metrics/#grafana-dashboard-for-argo-controller-metrics","text":"Please see the Argo Workflows metrics Grafana dashboard.","title":"Grafana Dashboard for Argo Controller Metrics"},{"location":"metrics/#defining-metrics","text":"Metrics are defined in-place on the Workflow/Step/Task where they are emitted from. Metrics are always processed after the Workflow/Step/Task completes, with the exception of real-time metrics . Metric definitions must include a name and a help doc string. 
They can also include any number of labels (when defining labels avoid cardinality explosion). Metrics with the same name must always use the same exact help string, having different metrics with the same name, but with a different help string will cause an error (this is a Prometheus requirement). All metrics can also be conditionally emitted by defining a when clause. This when clause works the same as elsewhere in a workflow. A metric must also have a type, it can be one of gauge , histogram , and counter ( see below ). Within the metric type a value must be specified. This value can be either a literal value of be an Argo variable . When defining a histogram , buckets must also be provided (see below). Argo variables can be included anywhere in the metric spec, such as in labels , name , help , when , etc. Metric names can only contain alphanumeric characters, _ , and : .","title":"Defining metrics"},{"location":"metrics/#metric-spec","text":"In Argo you can define a metric on the Workflow level or on the Template level. Here is an example of a Workflow level Gauge metric that will report the Workflow duration time: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : model-training- spec : entrypoint : steps metrics : prometheus : - name : exec_duration_gauge # Metric name (will be prepended with \"argo_workflows_\") labels : # Labels are optional. Avoid cardinality explosion. - key : name value : model_a help : \"Duration gauge by name\" # A help doc describing your metric. This is required. gauge : # The metric type. Available are \"gauge\", \"histogram\", and \"counter\". value : \"{{workflow.duration}}\" # The value of your metric. It could be an Argo variable (see variables doc) or a literal value ... An example of a Template -level Counter metric that will increase a counter every time the step fails: ... templates : - name : flakey metrics : prometheus : - name : result_counter help : \"Count of step execution by result status\" labels : - key : name value : flakey when : \"{{status}} == Failed\" # Emit the metric conditionally. Works the same as normal \"when\" counter : value : \"1\" # This increments the counter by 1 container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] ... A similar example of such a Counter metric that will increase for every step status ... templates : - name : flakey metrics : prometheus : - name : result_counter help : \"Count of step execution by result status\" labels : - key : name value : flakey - key : status value : \"{{status}}\" # Argo variable in `labels` counter : value : \"1\" container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] ... Finally, an example of a Template -level Histogram metric that tracks an internal value: ... templates : - name : random-int metrics : prometheus : - name : random_int_step_histogram help : \"Value of the int emitted by random-int at step level\" when : \"{{status}} == Succeeded\" # Only emit metric when step succeeds histogram : buckets : # Bins must be defined for histogram metrics - 2.01 # and are part of the metric descriptor. - 4.01 # All metrics in this series MUST have the - 6.01 # same buckets. 
- 8.01 - 10.01 value : \"{{outputs.parameters.rand-int-value}}\" # References itself for its output (see variables doc) outputs : parameters : - name : rand-int-value globalName : rand-int-value valueFrom : path : /tmp/rand_int.txt container : image : alpine:latest command : [ sh , -c ] args : [ \"RAND_INT=$((1 + RANDOM % 10)); echo $RAND_INT; echo $RAND_INT > /tmp/rand_int.txt\" ] ...","title":"Metric Spec"},{"location":"metrics/#real-time-metrics","text":"Argo supports a limited number of real-time metrics. These metrics are emitted in real-time, beginning when the step execution starts and ending when it completes. Real-time metrics are only available on Gauge type metrics and with a limited number of variables . To define a real-time metric simply add realtime: true to a gauge metric with a valid real-time variable. For example: gauge : realtime : true value : \"{{duration}}\"","title":"Real-Time Metrics"},{"location":"metrics/#metrics-endpoint","text":"By default, metrics are emitted by the workflow-controller on port 9090 on the /metrics path. By port-forwarding to the pod you can view the metrics in your browser at http://localhost:9090/metrics : kubectl -n argo port-forward deploy/workflow-controller 9090:9090 A metrics service is not installed as part of the default installation so you will need to add one if you wish to use a Prometheus Service Monitor: cat <.value The value of input parameter NAME The operator can be '=' or '!='. Multiple selectors can be combined with a comma, in which case they are anded together. Examples \u00b6 To filter for nodes where the input parameter 'foo' is equal to 'bar': --node-field-selector = inputs.parameters.foo.value = bar To filter for nodes where the input parameter 'foo' is equal to 'bar' and phase is not running: --node-field-selector = foo1 = bar1,phase! = Running Consider the following workflow: \u25cf appr-promotion-ffsv4 code-release \u251c\u2500\u2714 start sample-template/email appr-promotion-ffsv4-3704914002 2s \u251c\u2500\u25cf app1 wftempl1/approval-and-promotion \u2502 \u251c\u2500\u2714 notification-email sample-template/email appr-promotion-ffsv4-524476380 2s \u2502 \u2514\u2500\u01c1 wait-approval sample-template/waiting-for-approval \u251c\u2500\u2714 app2 wftempl2/promotion \u2502 \u251c\u2500\u2714 notification-email sample-template/email appr-promotion-ffsv4-2580536603 2s \u2502 \u251c\u2500\u2714 pr-approval sample-template/approval appr-promotion-ffsv4-3445567645 2s \u2502 \u2514\u2500\u2714 deployment sample-template/promote appr-promotion-ffsv4-970728982 1s \u2514\u2500\u25cf app3 wftempl1/approval-and-promotion \u251c\u2500\u2714 notification-email sample-template/email appr-promotion-ffsv4-388318034 2s \u2514\u2500\u01c1 wait-approval sample-template/waiting-for-approval Here we have two steps with the same displayName : wait-approval . To select one to suspend, we need to use their name , either appr-promotion-ffsv4.app1.wait-approval or appr-promotion-ffsv4.app3.wait-approval . If it is not clear what the full name of a node is, it can be found using kubectl : $ kubectl get wf appr-promotion-ffsv4 -o yaml ... 
appr-promotion-ffsv4-3235686597: boundaryID: appr-promotion-ffsv4-3079407832 displayName: wait-approval # <- Display Name finishedAt: null id: appr-promotion-ffsv4-3235686597 name: appr-promotion-ffsv4.app1.wait-approval # <- Full Name phase: Running startedAt: \"2021-01-20T17:00:25Z\" templateRef: name: sample-template template: waiting-for-approval templateScope: namespaced/wftempl1 type: Suspend ...","title":"Node Field Selectors"},{"location":"node-field-selector/#node-field-selectors","text":"v2.8 and after","title":"Node Field Selectors"},{"location":"node-field-selector/#introduction","text":"The resume, stop and retry Argo CLI and API commands support a --node-field-selector parameter to allow the user to select a subset of nodes for the command to apply to. In the case of the resume and stop commands these are the nodes that should be resumed or stopped. In the case of the retry command it allows specifying nodes that should be restarted even if they were previously successful (and must be used in combination with --restart-successful ) The format of this when used with the CLI is: --node-field-selector = FIELD = VALUE","title":"Introduction"},{"location":"node-field-selector/#possible-options","text":"The field can be any of: Field Description displayName Display name of the node. This is the name of the node as it is displayed on the CLI or UI, without considering its ancestors (see example below). This is a useful shortcut if there is only one node with the same displayName name Full name of the node. This is the full name of the node, including its ancestors (see example below). Using name is necessary when two or more nodes share the same displayName and disambiguation is required. templateName Template name of the node phase Phase status of the node - e.g. Running templateRef.name The name of the workflow template the node is referring to templateRef.template The template within the workflow template the node is referring to inputs.parameters..value The value of input parameter NAME The operator can be '=' or '!='. Multiple selectors can be combined with a comma, in which case they are anded together.","title":"Possible options"},{"location":"node-field-selector/#examples","text":"To filter for nodes where the input parameter 'foo' is equal to 'bar': --node-field-selector = inputs.parameters.foo.value = bar To filter for nodes where the input parameter 'foo' is equal to 'bar' and phase is not running: --node-field-selector = foo1 = bar1,phase! = Running Consider the following workflow: \u25cf appr-promotion-ffsv4 code-release \u251c\u2500\u2714 start sample-template/email appr-promotion-ffsv4-3704914002 2s \u251c\u2500\u25cf app1 wftempl1/approval-and-promotion \u2502 \u251c\u2500\u2714 notification-email sample-template/email appr-promotion-ffsv4-524476380 2s \u2502 \u2514\u2500\u01c1 wait-approval sample-template/waiting-for-approval \u251c\u2500\u2714 app2 wftempl2/promotion \u2502 \u251c\u2500\u2714 notification-email sample-template/email appr-promotion-ffsv4-2580536603 2s \u2502 \u251c\u2500\u2714 pr-approval sample-template/approval appr-promotion-ffsv4-3445567645 2s \u2502 \u2514\u2500\u2714 deployment sample-template/promote appr-promotion-ffsv4-970728982 1s \u2514\u2500\u25cf app3 wftempl1/approval-and-promotion \u251c\u2500\u2714 notification-email sample-template/email appr-promotion-ffsv4-388318034 2s \u2514\u2500\u01c1 wait-approval sample-template/waiting-for-approval Here we have two steps with the same displayName : wait-approval . 
To select one to suspend, we need to use their name , either appr-promotion-ffsv4.app1.wait-approval or appr-promotion-ffsv4.app3.wait-approval . If it is not clear what the full name of a node is, it can be found using kubectl : $ kubectl get wf appr-promotion-ffsv4 -o yaml ... appr-promotion-ffsv4-3235686597: boundaryID: appr-promotion-ffsv4-3079407832 displayName: wait-approval # <- Display Name finishedAt: null id: appr-promotion-ffsv4-3235686597 name: appr-promotion-ffsv4.app1.wait-approval # <- Full Name phase: Running startedAt: \"2021-01-20T17:00:25Z\" templateRef: name: sample-template template: waiting-for-approval templateScope: namespaced/wftempl1 type: Suspend ...","title":"Examples"},{"location":"offloading-large-workflows/","text":"Offloading Large Workflows \u00b6 v2.4 and after Argo stores workflows as Kubernetes resources (i.e. within EtcD). This creates a limit to their size as resources must be under 1MB. Each resource includes the status of each node, which is stored in the /status/nodes field for the resource. This can be over 1MB. If this happens, we try and compress the node status and store it in /status/compressedNodes . If the status is still too large, we then try and store it in an SQL database. To enable this feature, configure a Postgres or MySQL database under persistence in your configuration and set nodeStatusOffLoad: true . FAQ \u00b6 Why aren't my workflows appearing in the database? \u00b6 Offloading is expensive and often unnecessary, so we only offload when we need to. Your workflows aren't probably large enough. Error Failed to submit workflow: etcdserver: request is too large. \u00b6 You must use the Argo CLI having exported export ARGO_SERVER=... . Error offload node status is not supported \u00b6 Even after compressing node statuses, the workflow exceeded the EtcD size limit. To resolve, either enable node status offload as described above or look for ways to reduce the size of your workflow manifest: Use withItems or withParams to consolidate similar templates into a single parametrized template Use template defaults to factor shared template options to the workflow level Use workflow templates to factor frequently-used templates into separate resources Use workflows of workflows to factor a large workflow into a workflow of smaller workflows","title":"Offloading Large Workflows"},{"location":"offloading-large-workflows/#offloading-large-workflows","text":"v2.4 and after Argo stores workflows as Kubernetes resources (i.e. within EtcD). This creates a limit to their size as resources must be under 1MB. Each resource includes the status of each node, which is stored in the /status/nodes field for the resource. This can be over 1MB. If this happens, we try and compress the node status and store it in /status/compressedNodes . If the status is still too large, we then try and store it in an SQL database. To enable this feature, configure a Postgres or MySQL database under persistence in your configuration and set nodeStatusOffLoad: true .","title":"Offloading Large Workflows"},{"location":"offloading-large-workflows/#faq","text":"","title":"FAQ"},{"location":"offloading-large-workflows/#why-arent-my-workflows-appearing-in-the-database","text":"Offloading is expensive and often unnecessary, so we only offload when we need to. 
Your workflows aren't probably large enough.","title":"Why aren't my workflows appearing in the database?"},{"location":"offloading-large-workflows/#error-failed-to-submit-workflow-etcdserver-request-is-too-large","text":"You must use the Argo CLI having exported export ARGO_SERVER=... .","title":"Error Failed to submit workflow: etcdserver: request is too large."},{"location":"offloading-large-workflows/#error-offload-node-status-is-not-supported","text":"Even after compressing node statuses, the workflow exceeded the EtcD size limit. To resolve, either enable node status offload as described above or look for ways to reduce the size of your workflow manifest: Use withItems or withParams to consolidate similar templates into a single parametrized template Use template defaults to factor shared template options to the workflow level Use workflow templates to factor frequently-used templates into separate resources Use workflows of workflows to factor a large workflow into a workflow of smaller workflows","title":"Error offload node status is not supported"},{"location":"plugin-directory/","text":"Plugin Directory \u00b6 \u26a0\ufe0f Disclaimer: We take only minimal action to verify the authenticity of plugins. Install at your own risk. Name Description Hello Hello world plugin you can use as a template Slack Example Slack plugin Argo CD Sync Argo CD apps, e.g. to use Argo as CI Volcano Job Plugin Execute Volcano Job Python Plugin for executing Python Hermes Send notifications, e.g. Slack WASM Run Web Assembly (WASM) tasks Chaos Mesh Plugin Run Chaos Mesh experiment Pull Request Build Status Send build status of pull request to Git provider Atomic Workflow Plugin Stop the workflows which comes from the same WorkflowTemplate and have the same parameters AWS Plugin Argo Workflows Executor Plugin for AWS Services, e.g. SageMaker Pipelines, Glue, etc.","title":"Plugin Directory"},{"location":"plugin-directory/#plugin-directory","text":"\u26a0\ufe0f Disclaimer: We take only minimal action to verify the authenticity of plugins. Install at your own risk. Name Description Hello Hello world plugin you can use as a template Slack Example Slack plugin Argo CD Sync Argo CD apps, e.g. to use Argo as CI Volcano Job Plugin Execute Volcano Job Python Plugin for executing Python Hermes Send notifications, e.g. Slack WASM Run Web Assembly (WASM) tasks Chaos Mesh Plugin Run Chaos Mesh experiment Pull Request Build Status Send build status of pull request to Git provider Atomic Workflow Plugin Stop the workflows which comes from the same WorkflowTemplate and have the same parameters AWS Plugin Argo Workflows Executor Plugin for AWS Services, e.g. SageMaker Pipelines, Glue, etc.","title":"Plugin Directory"},{"location":"plugins/","text":"Plugins \u00b6 Plugins allow you to extend Argo Workflows to add new capabilities. You don't need to learn Golang, you can write in any language, including Python. Simple: a plugin just responds to RPC HTTP requests. You can iterate quickly by changing the plugin at runtime. You can get your plugin running today, no need to wait 3-5 months for review, approval, merge and an Argo software release. Executor plugins can be written and installed by both users and admins.","title":"Plugins"},{"location":"plugins/#plugins","text":"Plugins allow you to extend Argo Workflows to add new capabilities. You don't need to learn Golang, you can write in any language, including Python. Simple: a plugin just responds to RPC HTTP requests. 
You can iterate quickly by changing the plugin at runtime. You can get your plugin running today, no need to wait 3-5 months for review, approval, merge and an Argo software release. Executor plugins can be written and installed by both users and admins.","title":"Plugins"},{"location":"progress/","text":"Workflow Progress \u00b6 v2.12 and after When you run a workflow, the controller will report on its progress. We define progress as two numbers, N/M such that 0 <= N <= M and 0 <= M . N is the number of completed tasks. M is the total number of tasks. E.g. 0/0 , 0/1 or 50/100 . Unlike estimated duration , progress is deterministic. I.e. it will be the same for each workflow, regardless of any problems. Progress for each node is calculated as follows: For a pod node either 1/1 if completed or 0/1 otherwise. For non-leaf nodes, the sum of its children. For a whole workflow's, progress is the sum of all its leaf nodes. Warning M will increase during workflow run each time a node is added to the graph. Self reporting progress \u00b6 v3.3 and after Pods in a workflow can report their own progress during their runtime. This self reported progress overrides the auto-generated progress. Reporting progress works as follows: create and write the progress to a file indicated by the env variable ARGO_PROGRESS_FILE format of the progress must be N/M The executor will read this file every 3s and if there was an update, patch the pod annotations with workflows.argoproj.io/progress: N/M . The controller picks this up and writes the progress to the appropriate Status properties. Initially the progress of a workflows' pod is always 0/1 . If you want to influence this, make sure to set an initial progress annotation on the pod: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : progress- spec : entrypoint : main templates : - name : main dag : tasks : - name : progress template : progress - name : progress metadata : annotations : workflows.argoproj.io/progress : 0/100 container : image : alpine:3.14 command : [ \"/bin/sh\" , \"-c\" ] args : - | for i in `seq 1 10`; do sleep 10; echo \"$(($i*10))\"'/100' > $ARGO_PROGRESS_FILE; done","title":"Workflow Progress"},{"location":"progress/#workflow-progress","text":"v2.12 and after When you run a workflow, the controller will report on its progress. We define progress as two numbers, N/M such that 0 <= N <= M and 0 <= M . N is the number of completed tasks. M is the total number of tasks. E.g. 0/0 , 0/1 or 50/100 . Unlike estimated duration , progress is deterministic. I.e. it will be the same for each workflow, regardless of any problems. Progress for each node is calculated as follows: For a pod node either 1/1 if completed or 0/1 otherwise. For non-leaf nodes, the sum of its children. For a whole workflow's, progress is the sum of all its leaf nodes. Warning M will increase during workflow run each time a node is added to the graph.","title":"Workflow Progress"},{"location":"progress/#self-reporting-progress","text":"v3.3 and after Pods in a workflow can report their own progress during their runtime. This self reported progress overrides the auto-generated progress. Reporting progress works as follows: create and write the progress to a file indicated by the env variable ARGO_PROGRESS_FILE format of the progress must be N/M The executor will read this file every 3s and if there was an update, patch the pod annotations with workflows.argoproj.io/progress: N/M . 
The controller picks this up and writes the progress to the appropriate Status properties. Initially the progress of a workflows' pod is always 0/1 . If you want to influence this, make sure to set an initial progress annotation on the pod: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : progress- spec : entrypoint : main templates : - name : main dag : tasks : - name : progress template : progress - name : progress metadata : annotations : workflows.argoproj.io/progress : 0/100 container : image : alpine:3.14 command : [ \"/bin/sh\" , \"-c\" ] args : - | for i in `seq 1 10`; do sleep 10; echo \"$(($i*10))\"'/100' > $ARGO_PROGRESS_FILE; done","title":"Self reporting progress"},{"location":"public-api/","text":"Public API \u00b6 Argo Workflows public API is defined by the following: The file api/openapi-spec/swagger.json The schema of the table argo_archived_workflows . The installation options.","title":"Public API"},{"location":"public-api/#public-api","text":"Argo Workflows public API is defined by the following: The file api/openapi-spec/swagger.json The schema of the table argo_archived_workflows . The installation options.","title":"Public API"},{"location":"quick-start/","text":"Quick Start \u00b6 To see how Argo Workflows work, you can install it and run examples of simple workflows. Before you start you need a Kubernetes cluster and kubectl set up to be able to access that cluster. For the purposes of getting up and running, a local cluster is fine. You could consider the following local Kubernetes cluster options: minikube kind k3s or k3d Docker Desktop Alternatively, if you want to try out Argo Workflows and don't want to set up a Kubernetes cluster, try the Killercoda course . Development vs. Production These instructions are intended to help you get started quickly. They are not suitable in production. For production installs, please refer to the installation documentation . Install Argo Workflows \u00b6 To install Argo Workflows, navigate to the releases page and find the release you wish to use (the latest full release is preferred). Scroll down to the Controller and Server section and execute the kubectl commands. Below is an example of the install commands, ensure that you update the command to install the correct version number: kubectl create namespace argo kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v<>/install.yaml Patch argo-server authentication \u00b6 The argo-server (and thus the UI) defaults to client authentication, which requires clients to provide their Kubernetes bearer token in order to authenticate. For more information, refer to the Argo Server Auth Mode documentation . We will switch the authentication mode to server so that we can bypass the UI login for now: kubectl patch deployment \\ argo-server \\ --namespace argo \\ --type = 'json' \\ -p = '[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/args\", \"value\": [ \"server\", \"--auth-mode=server\" ]}]' Port-forward the UI \u00b6 Open a port-forward so you can access the UI: kubectl -n argo port-forward deployment/argo-server 2746 :2746 This will serve the UI on https://localhost:2746 . Due to the self-signed certificate, you will receive a TLS error which you will need to manually approve. Pay close attention to the URI. It uses https and not http . Navigating to http://localhost:2746 result in server-side error that breaks the port-forwarding. 
Install the Argo Workflows CLI \u00b6 You can more easily interact with Argo Workflows with the Argo CLI . Submitting an example workflow \u00b6 Submit an example workflow (CLI) \u00b6 argo submit -n argo --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-world.yaml The --watch flag used above will allow you to observe the workflow as it runs and the status of whether it succeeds. When the workflow completes, the watch on the workflow will stop. You can list all the Workflows you have submitted by running the command below: argo list -n argo You will notice the Workflow name has a hello-world- prefix followed by random characters. These characters are used to give Workflows unique names to help identify specific runs of a Workflow. If you submitted this Workflow again, the next Workflow run would have a different name. Using the argo get command, you can always review details of a Workflow run. The output for the command below will be the same as the information shown as when you submitted the Workflow: argo get -n argo @latest The @latest argument to the CLI is a short cut to view the latest Workflow run that was executed. You can also observe the logs of the Workflow run by running the following: argo logs -n argo @latest Submit an example workflow (GUI) \u00b6 Open a port-forward so you can access the UI: kubectl -n argo port-forward deployment/argo-server 2746 :2746 Navigate your browser to https://localhost:2746 . Click + Submit New Workflow and then Edit using full workflow options You can find an example workflow already in the text field. Press + Create to start the workflow.","title":"Quick Start"},{"location":"quick-start/#quick-start","text":"To see how Argo Workflows work, you can install it and run examples of simple workflows. Before you start you need a Kubernetes cluster and kubectl set up to be able to access that cluster. For the purposes of getting up and running, a local cluster is fine. You could consider the following local Kubernetes cluster options: minikube kind k3s or k3d Docker Desktop Alternatively, if you want to try out Argo Workflows and don't want to set up a Kubernetes cluster, try the Killercoda course . Development vs. Production These instructions are intended to help you get started quickly. They are not suitable in production. For production installs, please refer to the installation documentation .","title":"Quick Start"},{"location":"quick-start/#install-argo-workflows","text":"To install Argo Workflows, navigate to the releases page and find the release you wish to use (the latest full release is preferred). Scroll down to the Controller and Server section and execute the kubectl commands. Below is an example of the install commands, ensure that you update the command to install the correct version number: kubectl create namespace argo kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v<>/install.yaml","title":"Install Argo Workflows"},{"location":"quick-start/#patch-argo-server-authentication","text":"The argo-server (and thus the UI) defaults to client authentication, which requires clients to provide their Kubernetes bearer token in order to authenticate. For more information, refer to the Argo Server Auth Mode documentation . 
We will switch the authentication mode to server so that we can bypass the UI login for now: kubectl patch deployment \\ argo-server \\ --namespace argo \\ --type = 'json' \\ -p = '[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/args\", \"value\": [ \"server\", \"--auth-mode=server\" ]}]'","title":"Patch argo-server authentication"},{"location":"quick-start/#port-forward-the-ui","text":"Open a port-forward so you can access the UI: kubectl -n argo port-forward deployment/argo-server 2746 :2746 This will serve the UI on https://localhost:2746 . Due to the self-signed certificate, you will receive a TLS error which you will need to manually approve. Pay close attention to the URI. It uses https and not http . Navigating to http://localhost:2746 result in server-side error that breaks the port-forwarding.","title":"Port-forward the UI"},{"location":"quick-start/#install-the-argo-workflows-cli","text":"You can more easily interact with Argo Workflows with the Argo CLI .","title":"Install the Argo Workflows CLI"},{"location":"quick-start/#submitting-an-example-workflow","text":"","title":"Submitting an example workflow"},{"location":"quick-start/#submit-an-example-workflow-cli","text":"argo submit -n argo --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-world.yaml The --watch flag used above will allow you to observe the workflow as it runs and the status of whether it succeeds. When the workflow completes, the watch on the workflow will stop. You can list all the Workflows you have submitted by running the command below: argo list -n argo You will notice the Workflow name has a hello-world- prefix followed by random characters. These characters are used to give Workflows unique names to help identify specific runs of a Workflow. If you submitted this Workflow again, the next Workflow run would have a different name. Using the argo get command, you can always review details of a Workflow run. The output for the command below will be the same as the information shown as when you submitted the Workflow: argo get -n argo @latest The @latest argument to the CLI is a short cut to view the latest Workflow run that was executed. You can also observe the logs of the Workflow run by running the following: argo logs -n argo @latest","title":"Submit an example workflow (CLI)"},{"location":"quick-start/#submit-an-example-workflow-gui","text":"Open a port-forward so you can access the UI: kubectl -n argo port-forward deployment/argo-server 2746 :2746 Navigate your browser to https://localhost:2746 . Click + Submit New Workflow and then Edit using full workflow options You can find an example workflow already in the text field. Press + Create to start the workflow.","title":"Submit an example workflow (GUI)"},{"location":"releases/","text":"Releases \u00b6 You can find the most recent version under Github release . Versioning \u00b6 Versions are expressed as x.y.z , where x is the major version, y is the minor version, and z is the patch version, following Semantic Versioning terminology. Argo Workflows does not use Semantic Versioning. Minor versions may contain breaking changes. Patch versions only contain bug fixes and minor features. For stable , use the latest patch version. \u26a0\ufe0f Read the upgrading guide to find out about breaking changes before any upgrade. Supported Versions \u00b6 We maintain release branches for the most recent two minor releases. 
Fixes may be back-ported to release branches, depending on severity, risk, and, feasibility. Breaking changes will be documented in upgrading guide . Supported Version Skew \u00b6 Both the argo-server and argocli should be the same version as the controller. Release Cycle \u00b6 New minor versions are released roughly every 6 months. Release candidates (RCs) for major and minor releases are typically available for 4-6 weeks before the release becomes generally available (GA). Features may be shipped in subsequent release candidates. When features are shipped in a new release candidate, the most recent release candidate will be available for at least 2 weeks to ensure it is tested sufficiently before it is pushed to GA. If bugs are found with a feature and are not resolved within the 2 week period, the features will be rolled back so as to be saved for the next major/minor release timeline, and a new release candidate will be cut for testing before pushing to GA. Otherwise, we typically release every two weeks: Patch fixes for the current stable version. The next release candidate, if we are currently in a release-cycle. Kubernetes Compatibility Matrix \u00b6 Argo Workflows \\ Kubernetes 1.17 1.18 1.19 1.20 1.21 1.22 1.23 1.24 1.25 1.26 1.27 3.5 x x x ? ? ? ? ? \u2713 \u2713 \u2713 3.4 x x x ? \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 3.3 ? ? ? ? \u2713 \u2713 \u2713 ? ? ? ? 3.2 ? ? \u2713 \u2713 \u2713 ? ? ? ? ? ? 3.1 \u2713 \u2713 \u2713 ? ? ? ? ? ? ? ? \u2713 Fully supported versions. ? Due to breaking changes might not work. Also, we haven't thoroughly tested against this version. \u2715 Unsupported versions. Notes on Compatibility \u00b6 Argo versions may be compatible with newer and older versions than what it is listed but only three minor versions are supported per Argo release unless otherwise noted. The main branch of Argo Workflows is currently tested on Kubernetes 1.27.","title":"Releases"},{"location":"releases/#releases","text":"You can find the most recent version under Github release .","title":"Releases"},{"location":"releases/#versioning","text":"Versions are expressed as x.y.z , where x is the major version, y is the minor version, and z is the patch version, following Semantic Versioning terminology. Argo Workflows does not use Semantic Versioning. Minor versions may contain breaking changes. Patch versions only contain bug fixes and minor features. For stable , use the latest patch version. \u26a0\ufe0f Read the upgrading guide to find out about breaking changes before any upgrade.","title":"Versioning"},{"location":"releases/#supported-versions","text":"We maintain release branches for the most recent two minor releases. Fixes may be back-ported to release branches, depending on severity, risk, and, feasibility. Breaking changes will be documented in upgrading guide .","title":"Supported Versions"},{"location":"releases/#supported-version-skew","text":"Both the argo-server and argocli should be the same version as the controller.","title":"Supported Version Skew"},{"location":"releases/#release-cycle","text":"New minor versions are released roughly every 6 months. Release candidates (RCs) for major and minor releases are typically available for 4-6 weeks before the release becomes generally available (GA). Features may be shipped in subsequent release candidates. When features are shipped in a new release candidate, the most recent release candidate will be available for at least 2 weeks to ensure it is tested sufficiently before it is pushed to GA. 
If bugs are found with a feature and are not resolved within the 2 week period, the features will be rolled back so as to be saved for the next major/minor release timeline, and a new release candidate will be cut for testing before pushing to GA. Otherwise, we typically release every two weeks: Patch fixes for the current stable version. The next release candidate, if we are currently in a release-cycle.","title":"Release Cycle"},{"location":"releases/#kubernetes-compatibility-matrix","text":"Argo Workflows \\ Kubernetes 1.17 1.18 1.19 1.20 1.21 1.22 1.23 1.24 1.25 1.26 1.27 3.5 x x x ? ? ? ? ? \u2713 \u2713 \u2713 3.4 x x x ? \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 3.3 ? ? ? ? \u2713 \u2713 \u2713 ? ? ? ? 3.2 ? ? \u2713 \u2713 \u2713 ? ? ? ? ? ? 3.1 \u2713 \u2713 \u2713 ? ? ? ? ? ? ? ? \u2713 Fully supported versions. ? Due to breaking changes might not work. Also, we haven't thoroughly tested against this version. \u2715 Unsupported versions.","title":"Kubernetes Compatibility Matrix"},{"location":"releases/#notes-on-compatibility","text":"Argo versions may be compatible with newer and older versions than what it is listed but only three minor versions are supported per Argo release unless otherwise noted. The main branch of Argo Workflows is currently tested on Kubernetes 1.27.","title":"Notes on Compatibility"},{"location":"releasing/","text":"Release Instructions \u00b6 Cherry-Picking Fixes \u00b6 \u270b Before you start, make sure you have created a release branch (e.g. release-3.3 ) and it's passing CI. Then get a list of commits you may want to cherry-pick: ./hack/cherry-pick.sh release-3.3 \"fix\" true ./hack/cherry-pick.sh release-3.3 \"chore(deps)\" true ./hack/cherry-pick.sh release-3.3 \"build\" true ./hack/cherry-pick.sh release-3.3 \"ci\" true To automatically cherry-pick, run the following: ./hack/cherry-pick.sh release-3.3 \"fix\" false Then look for \"failed to cherry-pick\" in the log to find commits that fail to be cherry-picked and decide if a manual patch is necessary. Ignore: Fixes for features only on main . Dependency upgrades, unless they fix known security issues. Build or CI improvements, unless the release pipeline is blocked without them. Cherry-pick the first commit. Run make test locally before pushing. If the build timeouts the build caches may have gone, try re-running. Don't cherry-pick another commit until the CI passes. It is harder to find the cause of a new failed build if the last build failed too. Cherry-picking commits one-by-one and then waiting for the CI will take a long time. Instead, cherry-pick each commit then run make test locally before pushing. Publish Release \u00b6 \u270b Before you start, make sure the branch is passing CI. Push a new tag to the release branch. E.g.: git tag v3.3.4 git push upstream v3.3.4 # or origin if you do not use upstream GitHub Actions will automatically build and publish your release. This takes about 1h. Set your self a reminder to check this was successful. Update Changelog \u00b6 Once the tag is published, GitHub Actions will automatically open a PR to update the changelog. 
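While you wait, one convenient way to keep an eye on the release pipeline (assuming you have the GitHub CLI installed, which these instructions do not require) is: gh run list --repo argoproj/argo-workflows --limit 5 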
Once the PR is ready, you can approve it, enable auto-merge, and then run the following to force trigger the CI build: git branch -D create-pull-request/changelog git fetch upstream git checkout --track upstream/create-pull-request/changelog git commit -s --allow-empty -m \"docs: Force trigger CI\" git push upstream create-pull-request/changelog","title":"Release Instructions"},{"location":"releasing/#release-instructions","text":"","title":"Release Instructions"},{"location":"releasing/#cherry-picking-fixes","text":"\u270b Before you start, make sure you have created a release branch (e.g. release-3.3 ) and it's passing CI. Then get a list of commits you may want to cherry-pick: ./hack/cherry-pick.sh release-3.3 \"fix\" true ./hack/cherry-pick.sh release-3.3 \"chore(deps)\" true ./hack/cherry-pick.sh release-3.3 \"build\" true ./hack/cherry-pick.sh release-3.3 \"ci\" true To automatically cherry-pick, run the following: ./hack/cherry-pick.sh release-3.3 \"fix\" false Then look for \"failed to cherry-pick\" in the log to find commits that fail to be cherry-picked and decide if a manual patch is necessary. Ignore: Fixes for features only on main . Dependency upgrades, unless they fix known security issues. Build or CI improvements, unless the release pipeline is blocked without them. Cherry-pick the first commit. Run make test locally before pushing. If the build timeouts the build caches may have gone, try re-running. Don't cherry-pick another commit until the CI passes. It is harder to find the cause of a new failed build if the last build failed too. Cherry-picking commits one-by-one and then waiting for the CI will take a long time. Instead, cherry-pick each commit then run make test locally before pushing.","title":"Cherry-Picking Fixes"},{"location":"releasing/#publish-release","text":"\u270b Before you start, make sure the branch is passing CI. Push a new tag to the release branch. E.g.: git tag v3.3.4 git push upstream v3.3.4 # or origin if you do not use upstream GitHub Actions will automatically build and publish your release. This takes about 1h. Set your self a reminder to check this was successful.","title":"Publish Release"},{"location":"releasing/#update-changelog","text":"Once the tag is published, GitHub Actions will automatically open a PR to update the changelog. Once the PR is ready, you can approve it, enable auto-merge, and then run the following to force trigger the CI build: git branch -D create-pull-request/changelog git fetch upstream git checkout --track upstream/create-pull-request/changelog git commit -s --allow-empty -m \"docs: Force trigger CI\" git push upstream create-pull-request/changelog","title":"Update Changelog"},{"location":"resource-duration/","text":"Resource Duration \u00b6 v2.7 and after Argo Workflows provides an indication of how much resource your workflow has used and saves this information. This is intended to be an indicative but not accurate value. Calculation \u00b6 The calculation is always an estimate, and is calculated by duration.go based on container duration, specified pod resource requests, limits, or (for memory and CPU) defaults. Each indicator is divided by a common denominator depending on resource type. Base Amounts \u00b6 Each resource type has a denominator used to make large values smaller. CPU: 1 Memory: 100Mi Storage: 10Gi Ephemeral Storage: 10Gi All others: 1 The requested fraction of the base amount will be multiplied by the container's run time to get the container's Resource Duration. 
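Put another way (a restatement of the rule above, not a formula copied from the code): Resource Duration = container run time * ( requested amount / base amount ) , computed separately for each resource type. 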
For example, if you've requested 50Mi of memory (half of the base amount), and the container runs 120sec, then the reported Resource Duration will be 60sec * (100Mi memory) . Request Defaults \u00b6 If requests are not set for a container, Kubernetes defaults to limits . If limits are not set, Argo falls back to 100m for CPU and 100Mi for memory. Note: these are Argo's defaults, not Kubernetes' defaults. For the most meaningful results, set requests and/or limits for all containers. Example \u00b6 A pod that runs for 3min, with a CPU limit of 2000m , a memory limit of 1Gi and an nvidia.com/gpu resource limit of 1 : CPU: 3min * 2000m / 1000m = 6min * (1 cpu) Memory: 3min * 1Gi / 100Mi = 30min * (100Mi memory) GPU: 3min * 1 / 1 = 3min * (1 nvidia.com/gpu) Web/CLI reporting \u00b6 Both the web and CLI give abbreviated usage, like 9m10s*cpu,6s*memory,2m31s*nvidia.com/gpu . In this context, resources like memory refer to the \"base amounts\". For example, memory means \"amount of time a resource requested 100Mi of memory.\" If a container only uses 10Mi , each second it runs will only count as a tenth-second of memory . Rounding Down \u00b6 For a short running pods (<10s), if the memory request is also small (for example, 10Mi ), then the memory value may be 0s. This is because the denominator is 100Mi .","title":"Resource Duration"},{"location":"resource-duration/#resource-duration","text":"v2.7 and after Argo Workflows provides an indication of how much resource your workflow has used and saves this information. This is intended to be an indicative but not accurate value.","title":"Resource Duration"},{"location":"resource-duration/#calculation","text":"The calculation is always an estimate, and is calculated by duration.go based on container duration, specified pod resource requests, limits, or (for memory and CPU) defaults. Each indicator is divided by a common denominator depending on resource type.","title":"Calculation"},{"location":"resource-duration/#base-amounts","text":"Each resource type has a denominator used to make large values smaller. CPU: 1 Memory: 100Mi Storage: 10Gi Ephemeral Storage: 10Gi All others: 1 The requested fraction of the base amount will be multiplied by the container's run time to get the container's Resource Duration. For example, if you've requested 50Mi of memory (half of the base amount), and the container runs 120sec, then the reported Resource Duration will be 60sec * (100Mi memory) .","title":"Base Amounts"},{"location":"resource-duration/#request-defaults","text":"If requests are not set for a container, Kubernetes defaults to limits . If limits are not set, Argo falls back to 100m for CPU and 100Mi for memory. Note: these are Argo's defaults, not Kubernetes' defaults. For the most meaningful results, set requests and/or limits for all containers.","title":"Request Defaults"},{"location":"resource-duration/#example","text":"A pod that runs for 3min, with a CPU limit of 2000m , a memory limit of 1Gi and an nvidia.com/gpu resource limit of 1 : CPU: 3min * 2000m / 1000m = 6min * (1 cpu) Memory: 3min * 1Gi / 100Mi = 30min * (100Mi memory) GPU: 3min * 1 / 1 = 3min * (1 nvidia.com/gpu)","title":"Example"},{"location":"resource-duration/#webcli-reporting","text":"Both the web and CLI give abbreviated usage, like 9m10s*cpu,6s*memory,2m31s*nvidia.com/gpu . In this context, resources like memory refer to the \"base amounts\". 
For example, memory means \"amount of time a resource requested 100Mi of memory.\" If a container only uses 10Mi , each second it runs will only count as a tenth-second of memory .","title":"Web/CLI reporting"},{"location":"resource-duration/#rounding-down","text":"For a short running pods (<10s), if the memory request is also small (for example, 10Mi ), then the memory value may be 0s. This is because the denominator is 100Mi .","title":"Rounding Down"},{"location":"resource-template/","text":"Resource Template \u00b6 v2.0 See Kubernetes Resources .","title":"Resource Template"},{"location":"resource-template/#resource-template","text":"v2.0 See Kubernetes Resources .","title":"Resource Template"},{"location":"rest-api/","text":"REST API \u00b6 Argo Server API \u00b6 v2.5 and after Argo Workflows ships with a server that provides more features and security than before. The server can be configured with or without client auth ( server --auth-mode client ). When it is disabled, then clients must pass their KUBECONFIG base 64 encoded in the HTTP Authorization header: ARGO_TOKEN = $( argo auth token ) curl -H \"Authorization: $ARGO_TOKEN \" https://localhost:2746/api/v1/workflows/argo Learn more on how to generate an access token . API reference docs : Latest docs (maybe incorrect) Interactively in the Argo Server UI . (>= v2.10)","title":"REST API"},{"location":"rest-api/#rest-api","text":"","title":"REST API"},{"location":"rest-api/#argo-server-api","text":"v2.5 and after Argo Workflows ships with a server that provides more features and security than before. The server can be configured with or without client auth ( server --auth-mode client ). When it is disabled, then clients must pass their KUBECONFIG base 64 encoded in the HTTP Authorization header: ARGO_TOKEN = $( argo auth token ) curl -H \"Authorization: $ARGO_TOKEN \" https://localhost:2746/api/v1/workflows/argo Learn more on how to generate an access token . API reference docs : Latest docs (maybe incorrect) Interactively in the Argo Server UI . (>= v2.10)","title":"Argo Server API"},{"location":"rest-examples/","text":"API Examples \u00b6 Document contains couple of examples of workflow JSON's to submit via argo-server REST API. 
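Note (an assumption based on the quick-start setup rather than anything stated in these examples): if your argo-server is serving the default self-signed certificate, you may need to add curl's -k / --insecure flag, for example curl -k --request GET --url https://localhost:2746/api/v1/workflows/argo . 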
v2.5 and after Assuming the namespace of argo-server is argo authentication is turned off (otherwise provide Authorization header) argo-server is available on localhost:2746 Submitting workflow \u00b6 curl --request POST \\ --url https://localhost:2746/api/v1/workflows/argo \\ --header 'content-type: application/json' \\ --data '{ \"namespace\": \"argo\", \"serverDryRun\": false, \"workflow\": { \"metadata\": { \"generateName\": \"hello-world-\", \"namespace\": \"argo\", \"labels\": { \"workflows.argoproj.io/completed\": \"false\" } }, \"spec\": { \"templates\": [ { \"name\": \"whalesay\", \"arguments\": {}, \"inputs\": {}, \"outputs\": {}, \"metadata\": {}, \"container\": { \"name\": \"\", \"image\": \"docker/whalesay:latest\", \"command\": [ \"cowsay\" ], \"args\": [ \"hello world\" ], \"resources\": {} } } ], \"entrypoint\": \"whalesay\", \"arguments\": {} } } }' Getting workflows for namespace argo \u00b6 curl --request GET \\ --url https://localhost:2746/api/v1/workflows/argo Getting single workflow for namespace argo \u00b6 curl --request GET \\ --url https://localhost:2746/api/v1/workflows/argo/abc-dthgt Deleting single workflow for namespace argo \u00b6 curl --request DELETE \\ --url https://localhost:2746/api/v1/workflows/argo/abc-dthgt","title":"API Examples"},{"location":"rest-examples/#api-examples","text":"Document contains couple of examples of workflow JSON's to submit via argo-server REST API. v2.5 and after Assuming the namespace of argo-server is argo authentication is turned off (otherwise provide Authorization header) argo-server is available on localhost:2746","title":"API Examples"},{"location":"rest-examples/#submitting-workflow","text":"curl --request POST \\ --url https://localhost:2746/api/v1/workflows/argo \\ --header 'content-type: application/json' \\ --data '{ \"namespace\": \"argo\", \"serverDryRun\": false, \"workflow\": { \"metadata\": { \"generateName\": \"hello-world-\", \"namespace\": \"argo\", \"labels\": { \"workflows.argoproj.io/completed\": \"false\" } }, \"spec\": { \"templates\": [ { \"name\": \"whalesay\", \"arguments\": {}, \"inputs\": {}, \"outputs\": {}, \"metadata\": {}, \"container\": { \"name\": \"\", \"image\": \"docker/whalesay:latest\", \"command\": [ \"cowsay\" ], \"args\": [ \"hello world\" ], \"resources\": {} } } ], \"entrypoint\": \"whalesay\", \"arguments\": {} } } }'","title":"Submitting workflow"},{"location":"rest-examples/#getting-workflows-for-namespace-argo","text":"curl --request GET \\ --url https://localhost:2746/api/v1/workflows/argo","title":"Getting workflows for namespace argo"},{"location":"rest-examples/#getting-single-workflow-for-namespace-argo","text":"curl --request GET \\ --url https://localhost:2746/api/v1/workflows/argo/abc-dthgt","title":"Getting single workflow for namespace argo"},{"location":"rest-examples/#deleting-single-workflow-for-namespace-argo","text":"curl --request DELETE \\ --url https://localhost:2746/api/v1/workflows/argo/abc-dthgt","title":"Deleting single workflow for namespace argo"},{"location":"retries/","text":"Retries \u00b6 Argo Workflows offers a range of options for retrying failed steps. 
Configuring retryStrategy in WorkflowSpec \u00b6 apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : retry-container- spec : entrypoint : retry-container templates : - name : retry-container retryStrategy : limit : \"10\" container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] The retryPolicy and expression are re-evaluated after each attempt. For example, if you set retryPolicy: OnFailure and your first attempt produces a failure then a retry will be attempted. If the second attempt produces an error, then another attempt will not be made. Retry policies \u00b6 Use retryPolicy to choose which failure types to retry: Always : Retry all failed steps OnFailure : Retry steps whose main container is marked as failed in Kubernetes OnError : Retry steps that encounter Argo controller errors, or whose init or wait containers fail OnTransientError : Retry steps that encounter errors defined as transient , or errors matching the TRANSIENT_ERROR_PATTERN environment variable . Available in version 3.0 and later. The retryPolicy applies even if you also specify an expression , but in version 3.5 or later the default policy means the expression makes the decision unless you explicitly specify a policy. The default retryPolicy is OnFailure , except in version 3.5 or later when an expression is also supplied, when it is Always . This may be easier to understand in this diagram. flowchart LR start([Will a retry be attempted]) start --> policy policy(Policy Specified?) policy-->|No|expressionNoPolicy policy-->|Yes|policyGiven policyGiven(Expression Specified?) policyGiven-->|No|policyGivenApplies policyGiven-->|Yes|policyAndExpression policyGivenApplies(Supplied Policy) policyAndExpression(Supplied Policy AND Expression) expressionNoPolicy(Expression specified?) expressionNoPolicy-->|No|onfailureNoExpr expressionNoPolicy-->|Yes|version onfailureNoExpr[OnFailure] onfailure[OnFailure AND Expression] version(Workflows version) version-->|3.4 or ealier|onfailure always[Only Expression matters] version-->|3.5 or later|always An example retry strategy: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : retry-on-error- spec : entrypoint : error-container templates : - name : error-container retryStrategy : limit : \"2\" retryPolicy : \"Always\" container : image : python command : [ \"python\" , \"-c\" ] # fail with a 80% probability args : [ \"import random; import sys; exit_code = random.choice(range(0, 5)); sys.exit(exit_code)\" ] Conditional retries \u00b6 v3.2 and after You can also use expression to control retries. The expression field accepts an expr expression and has access to the following variables: lastRetry.exitCode : The exit code of the last retry, or \"-1\" if not available lastRetry.status : The phase of the last retry: Error, Failed lastRetry.duration : The duration of the last retry, in seconds lastRetry.message : The message output from the last retry (available from version 3.5) If expression evaluates to false, the step will not be retried. The expression result will be logical and with the retryPolicy . Both must be true to retry. See example for usage. Back-Off \u00b6 You can configure the delay between retries with backoff . 
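For instance, a minimal sketch of a backoff configuration (the field names follow retryStrategy ; the values are illustrative): retryStrategy : limit : \"5\" backoff : duration : \"10s\" factor : \"2\" maxDuration : \"5m\" This would wait 10 seconds before the first retry and roughly double the delay on each subsequent attempt, capped at 5 minutes. 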
See example for usage.","title":"Retries"},{"location":"retries/#retries","text":"Argo Workflows offers a range of options for retrying failed steps.","title":"Retries"},{"location":"retries/#configuring-retrystrategy-in-workflowspec","text":"apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : retry-container- spec : entrypoint : retry-container templates : - name : retry-container retryStrategy : limit : \"10\" container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] The retryPolicy and expression are re-evaluated after each attempt. For example, if you set retryPolicy: OnFailure and your first attempt produces a failure then a retry will be attempted. If the second attempt produces an error, then another attempt will not be made.","title":"Configuring retryStrategy in WorkflowSpec"},{"location":"retries/#retry-policies","text":"Use retryPolicy to choose which failure types to retry: Always : Retry all failed steps OnFailure : Retry steps whose main container is marked as failed in Kubernetes OnError : Retry steps that encounter Argo controller errors, or whose init or wait containers fail OnTransientError : Retry steps that encounter errors defined as transient , or errors matching the TRANSIENT_ERROR_PATTERN environment variable . Available in version 3.0 and later. The retryPolicy applies even if you also specify an expression , but in version 3.5 or later the default policy means the expression makes the decision unless you explicitly specify a policy. The default retryPolicy is OnFailure , except in version 3.5 or later when an expression is also supplied, when it is Always . This may be easier to understand in this diagram. flowchart LR start([Will a retry be attempted]) start --> policy policy(Policy Specified?) policy-->|No|expressionNoPolicy policy-->|Yes|policyGiven policyGiven(Expression Specified?) policyGiven-->|No|policyGivenApplies policyGiven-->|Yes|policyAndExpression policyGivenApplies(Supplied Policy) policyAndExpression(Supplied Policy AND Expression) expressionNoPolicy(Expression specified?) expressionNoPolicy-->|No|onfailureNoExpr expressionNoPolicy-->|Yes|version onfailureNoExpr[OnFailure] onfailure[OnFailure AND Expression] version(Workflows version) version-->|3.4 or ealier|onfailure always[Only Expression matters] version-->|3.5 or later|always An example retry strategy: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : retry-on-error- spec : entrypoint : error-container templates : - name : error-container retryStrategy : limit : \"2\" retryPolicy : \"Always\" container : image : python command : [ \"python\" , \"-c\" ] # fail with a 80% probability args : [ \"import random; import sys; exit_code = random.choice(range(0, 5)); sys.exit(exit_code)\" ]","title":"Retry policies"},{"location":"retries/#conditional-retries","text":"v3.2 and after You can also use expression to control retries. The expression field accepts an expr expression and has access to the following variables: lastRetry.exitCode : The exit code of the last retry, or \"-1\" if not available lastRetry.status : The phase of the last retry: Error, Failed lastRetry.duration : The duration of the last retry, in seconds lastRetry.message : The message output from the last retry (available from version 3.5) If expression evaluates to false, the step will not be retried. 
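As a minimal illustration (the limit and expression are made up and use only the documented lastRetry variables): retryStrategy : limit : \"3\" retryPolicy : \"Always\" expression : \"lastRetry.status == 'Failed'\" would retry up to three times, but only while the previous attempt ended in the Failed phase rather than Error. 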
The expression result will be logical and with the retryPolicy . Both must be true to retry. See example for usage.","title":"Conditional retries"},{"location":"retries/#back-off","text":"You can configure the delay between retries with backoff . See example for usage.","title":"Back-Off"},{"location":"roadmap/","text":"Roadmap \u00b6 The roadmap is currently being revamped. If you want to join the discussions, please join our contributors meeting .","title":"Roadmap"},{"location":"roadmap/#roadmap","text":"The roadmap is currently being revamped. If you want to join the discussions, please join our contributors meeting .","title":"Roadmap"},{"location":"running-at-massive-scale/","text":"Running At Massive Scale \u00b6 Argo Workflows is an incredibly scalable tool for orchestrating workflows. It empowers you to process thousands of workflows per day, with each workflow consisting of tens of thousands of nodes. Moreover, it effortlessly handles hundreds of thousands of smaller workflows daily. However, optimizing your setup is crucial to fully leverage this capability. Run The Latest Version \u00b6 You must be running at least v3.1 for several recommendations to work. Upgrade to the very latest patch. Performance fixes often come in patches. Test Your Cluster Before You Install Argo Workflows \u00b6 You'll need a big cluster, with a big Kubernetes master. Users often encounter problems with Kubernetes needing to be configured for the scale. E.g. Kubernetes API server being too small. We recommend you test your cluster to make sure it can run the number of pods they need, even before installing Argo. Create pods at the rate you expect that it'll be created in production. Make sure Kubernetes can keep up with requests to delete pods at the same rate. You'll need to GC data quickly. The less data that Kubernetes and Argo deal with, the less work they need to do. Use pod GC and workflow GC to achieve this. Overwhelmed Kubernetes API \u00b6 Where Argo has a lot of work to do, the Kubernetes API can be overwhelmed. There are several strategies to reduce this: Use the Emissary executor (>= v3.1). This does not make any Kubernetes API requests (except for resources template). Limit the number of concurrent workflows using parallelism. Rate-limit pod creation configuration (>= v3.1). Set DEFAULT_REQUEUE_TIME=1m Overwhelmed Database \u00b6 If you're running workflows with many nodes, you'll probably be offloading data to a database. Offloaded data is kept for 5m. You can reduce the number of records created by setting DEFAULT_REQUEUE_TIME=1m . This will slow reconciliation, but will suit workflows where nodes run for over 1m. Miscellaneous \u00b6 See also Scaling .","title":"Running At Massive Scale"},{"location":"running-at-massive-scale/#running-at-massive-scale","text":"Argo Workflows is an incredibly scalable tool for orchestrating workflows. It empowers you to process thousands of workflows per day, with each workflow consisting of tens of thousands of nodes. Moreover, it effortlessly handles hundreds of thousands of smaller workflows daily. However, optimizing your setup is crucial to fully leverage this capability.","title":"Running At Massive Scale"},{"location":"running-at-massive-scale/#run-the-latest-version","text":"You must be running at least v3.1 for several recommendations to work. Upgrade to the very latest patch. 
Performance fixes often come in patches.","title":"Run The Latest Version"},{"location":"running-at-massive-scale/#test-your-cluster-before-you-install-argo-workflows","text":"You'll need a big cluster, with a big Kubernetes master. Users often encounter problems with Kubernetes needing to be configured for the scale. E.g. Kubernetes API server being too small. We recommend you test your cluster to make sure it can run the number of pods they need, even before installing Argo. Create pods at the rate you expect that it'll be created in production. Make sure Kubernetes can keep up with requests to delete pods at the same rate. You'll need to GC data quickly. The less data that Kubernetes and Argo deal with, the less work they need to do. Use pod GC and workflow GC to achieve this.","title":"Test Your Cluster Before You Install Argo Workflows"},{"location":"running-at-massive-scale/#overwhelmed-kubernetes-api","text":"Where Argo has a lot of work to do, the Kubernetes API can be overwhelmed. There are several strategies to reduce this: Use the Emissary executor (>= v3.1). This does not make any Kubernetes API requests (except for resources template). Limit the number of concurrent workflows using parallelism. Rate-limit pod creation configuration (>= v3.1). Set DEFAULT_REQUEUE_TIME=1m","title":"Overwhelmed Kubernetes API"},{"location":"running-at-massive-scale/#overwhelmed-database","text":"If you're running workflows with many nodes, you'll probably be offloading data to a database. Offloaded data is kept for 5m. You can reduce the number of records created by setting DEFAULT_REQUEUE_TIME=1m . This will slow reconciliation, but will suit workflows where nodes run for over 1m.","title":"Overwhelmed Database"},{"location":"running-at-massive-scale/#miscellaneous","text":"See also Scaling .","title":"Miscellaneous"},{"location":"running-locally/","text":"Running Locally \u00b6 You have two options: Use the Dev Container . This takes about 7 minutes. This can be used with VSCode, the devcontainer CLI, or GitHub Codespaces. Install the requirements on your computer manually. This takes about 1 hour. Development Container \u00b6 The development container should be able to do everything you need to do to develop Argo Workflows without installing tools on your local machine. It takes quite a long time to build the container. It runs k3d inside the container so you have a cluster to test against. To communicate with services running either in other development containers or directly on the local machine (e.g. a database), the following URL can be used in the workflow spec: host.docker.internal: . This facilitates the implementation of workflows which need to connect to a database or an API server. You can use the development container in a few different ways: Visual Studio Code with Dev Containers extension . Open your argo-workflows folder in VSCode and it should offer to use the development container automatically. VSCode will allow you to forward ports to allow your external browser to access the running components. devcontainer CLI . Once installed, go to your argo-workflows folder and run devcontainer up --workspace-folder . followed by devcontainer exec --workspace-folder . /bin/bash to get a shell where you can build the code. You can use any editor outside the container to edit code; any changes will be mirrored inside the container. Due to a limitation of the CLI, only port 8080 (the Web UI) will be exposed for you to access if you run this way. 
Other services are usable from the shell inside. GitHub Codespaces . You can start editing as soon as VSCode is open, though you may want to wait for pre-build.sh to finish installing dependencies, building binaries, and setting up the cluster before running any commands in the terminal. Once you start running services (see next steps below), you can click on the \"PORTS\" tab in the VSCode terminal to see all forwarded ports. You can open the Web UI in a new tab from there. Once you have entered the container, continue to Developing Locally . Note: for Apple Silicon This platform can spend 3 times the indicated time Configure Docker Desktop to use BuildKit: \"features\" : { \"buildkit\" : true }, For Windows WSL2 Configure .wslconfig to limit memory usage by the WSL2 to prevent VSCode OOM. For Linux Use Docker Desktop instead of Docker Engine to prevent incorrect network configuration by k3d. Requirements \u00b6 Clone the Git repo into: $GOPATH/src/github.com/argoproj/argo-workflows . Any other path will break the code generation. Add the following to your /etc/hosts : 127.0.0.1 dex 127.0.0.1 minio 127.0.0.1 postgres 127.0.0.1 mysql 127.0.0.1 azurite To build on your own machine without using the Dev Container you will need: Go Yarn Docker protoc node for running the UI A local Kubernetes cluster ( k3d , kind , or minikube ) We recommend using K3D to set up the local Kubernetes cluster since this will allow you to test RBAC set-up and is fast. You can set-up K3D to be part of your default kube config as follows: k3d cluster start --wait Alternatively, you can use Minikube to set up the local Kubernetes cluster. Once a local Kubernetes cluster has started via minikube start , your kube config will use Minikube's context automatically. Warning Do not use Docker Desktop's embedded Kubernetes, it does not support Kubernetes RBAC (i.e. kubectl auth can-i always returns allowed ). Developing locally \u00b6 To start: The controller, so you can run workflows. MinIO ( http://localhost:9000 , use admin/password) so you can use artifacts. Run: make start Make sure you don't see any errors in your terminal. This runs the Workflow Controller locally on your machine (not in Docker/Kubernetes). You can submit a workflow for testing using kubectl : kubectl create -f examples/hello-world.yaml We recommend running make clean before make start to ensure recompilation. If you made changes to the executor, you need to build the image: make argoexec-image To also start the API on http://localhost:2746 : make start API = true This runs the Argo Server (in addition to the Workflow Controller) locally on your machine. To also start the UI on http://localhost:8080 ( UI=true implies API=true ): make start UI = true If you are making change to the CLI (i.e. Argo Server), you can build it separately if you want: make cli ./dist/argo submit examples/hello-world.yaml ; # new CLI is created as `./dist/argo` Although, note that this will be built automatically if you do: make start API=true . To test the workflow archive, use PROFILE=mysql or PROFILE=postgres : make start PROFILE = mysql You'll have, either: Postgres on http://localhost:5432 , run make postgres-cli to access. MySQL on http://localhost:3306 , run make mysql-cli to access. 
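Once you are in one of those database shells, a quick way to confirm the archive is receiving data (an illustrative query, not part of the original steps) is: select * from argo_archived_workflows limit 10 ; 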
To test SSO integration, use PROFILE=sso : make start UI = true PROFILE = sso Running E2E tests locally \u00b6 Start up Argo Workflows using the following: make start PROFILE = mysql AUTH_MODE = client STATIC_FILES = false API = true If you want to run Azure tests against a local Azurite: kubectl -n $KUBE_NAMESPACE apply -f test/e2e/azure/deploy-azurite.yaml make start Running One Test \u00b6 In most cases, you want to run the test that relates to your changes locally. You should not run all the tests suites. Our CI will run those concurrently when you create a PR, which will give you feedback much faster. Find the test that you want to run in test/e2e make TestArtifactServer Running A Set Of Tests \u00b6 You can find the build tag at the top of the test file. //go:build api You need to run make test-{buildTag} , so for api that would be: make test-api Diagnosing Test Failure \u00b6 Tests often fail: that's good. To diagnose failure: Run kubectl get pods , are pods in the state you expect? Run kubectl get wf , is your workflow in the state you expect? What do the pod logs say? I.e. kubectl logs . Check the controller and argo-server logs. These are printed to the console you ran make start in. Is anything logged at level=error ? If tests run slowly or time out, factory reset your Kubernetes cluster. Committing \u00b6 Before you commit code and raise a PR, always run: make pre-commit -B Please do the following when creating your PR: Sign-off your commits. Use Conventional Commit messages . Suffix the issue number. Examples: git commit --signoff -m 'fix: Fixed broken thing. Fixes #1234' git commit --signoff -m 'feat: Added a new feature. Fixes #1234' Troubleshooting \u00b6 When running make pre-commit -B , if you encounter errors like make: *** [pkg/apiclient/clusterworkflowtemplate/cluster-workflow-template.swagger.json] Error 1 , ensure that you have checked out your code into $GOPATH/src/github.com/argoproj/argo-workflows . If you encounter \"out of heap\" issues when building UI through Docker, please validate resources allocated to Docker. Compilation may fail if allocated RAM is less than 4Gi. To start profiling with pprof , pass ARGO_PPROF=true when starting the controller locally. Then run the following: go tool pprof http://localhost:6060/debug/pprof/profile # 30-second CPU profile go tool pprof http://localhost:6060/debug/pprof/heap # heap profile go tool pprof http://localhost:6060/debug/pprof/block # goroutine blocking profile Using Multiple Terminals \u00b6 I run the controller in one terminal, and the UI in another. I like the UI: it is much faster to debug workflows than the terminal. This allows you to make changes to the controller and re-start it, without restarting the UI (which I think takes too long to start-up). As a convenience, CTRL=false implies UI=true , so just run: make start CTRL = false","title":"Running Locally"},{"location":"running-locally/#running-locally","text":"You have two options: Use the Dev Container . This takes about 7 minutes. This can be used with VSCode, the devcontainer CLI, or GitHub Codespaces. Install the requirements on your computer manually. This takes about 1 hour.","title":"Running Locally"},{"location":"running-locally/#development-container","text":"The development container should be able to do everything you need to do to develop Argo Workflows without installing tools on your local machine. It takes quite a long time to build the container. It runs k3d inside the container so you have a cluster to test against. 
To communicate with services running either in other development containers or directly on the local machine (e.g. a database), the following URL can be used in the workflow spec: host.docker.internal: . This facilitates the implementation of workflows which need to connect to a database or an API server. You can use the development container in a few different ways: Visual Studio Code with Dev Containers extension . Open your argo-workflows folder in VSCode and it should offer to use the development container automatically. VSCode will allow you to forward ports to allow your external browser to access the running components. devcontainer CLI . Once installed, go to your argo-workflows folder and run devcontainer up --workspace-folder . followed by devcontainer exec --workspace-folder . /bin/bash to get a shell where you can build the code. You can use any editor outside the container to edit code; any changes will be mirrored inside the container. Due to a limitation of the CLI, only port 8080 (the Web UI) will be exposed for you to access if you run this way. Other services are usable from the shell inside. GitHub Codespaces . You can start editing as soon as VSCode is open, though you may want to wait for pre-build.sh to finish installing dependencies, building binaries, and setting up the cluster before running any commands in the terminal. Once you start running services (see next steps below), you can click on the \"PORTS\" tab in the VSCode terminal to see all forwarded ports. You can open the Web UI in a new tab from there. Once you have entered the container, continue to Developing Locally . Note: for Apple Silicon This platform can spend 3 times the indicated time Configure Docker Desktop to use BuildKit: \"features\" : { \"buildkit\" : true }, For Windows WSL2 Configure .wslconfig to limit memory usage by the WSL2 to prevent VSCode OOM. For Linux Use Docker Desktop instead of Docker Engine to prevent incorrect network configuration by k3d.","title":"Development Container"},{"location":"running-locally/#requirements","text":"Clone the Git repo into: $GOPATH/src/github.com/argoproj/argo-workflows . Any other path will break the code generation. Add the following to your /etc/hosts : 127.0.0.1 dex 127.0.0.1 minio 127.0.0.1 postgres 127.0.0.1 mysql 127.0.0.1 azurite To build on your own machine without using the Dev Container you will need: Go Yarn Docker protoc node for running the UI A local Kubernetes cluster ( k3d , kind , or minikube ) We recommend using K3D to set up the local Kubernetes cluster since this will allow you to test RBAC set-up and is fast. You can set-up K3D to be part of your default kube config as follows: k3d cluster start --wait Alternatively, you can use Minikube to set up the local Kubernetes cluster. Once a local Kubernetes cluster has started via minikube start , your kube config will use Minikube's context automatically. Warning Do not use Docker Desktop's embedded Kubernetes, it does not support Kubernetes RBAC (i.e. kubectl auth can-i always returns allowed ).","title":"Requirements"},{"location":"running-locally/#developing-locally","text":"To start: The controller, so you can run workflows. MinIO ( http://localhost:9000 , use admin/password) so you can use artifacts. Run: make start Make sure you don't see any errors in your terminal. This runs the Workflow Controller locally on your machine (not in Docker/Kubernetes). 
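(If the controller cannot reach your cluster, double-check that your kube config points at the local k3d or Minikube cluster; kubectl config current-context is a quick, generic way to verify. This tip is not from the original text.) 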
You can submit a workflow for testing using kubectl : kubectl create -f examples/hello-world.yaml We recommend running make clean before make start to ensure recompilation. If you made changes to the executor, you need to build the image: make argoexec-image To also start the API on http://localhost:2746 : make start API = true This runs the Argo Server (in addition to the Workflow Controller) locally on your machine. To also start the UI on http://localhost:8080 ( UI=true implies API=true ): make start UI = true If you are making change to the CLI (i.e. Argo Server), you can build it separately if you want: make cli ./dist/argo submit examples/hello-world.yaml ; # new CLI is created as `./dist/argo` Although, note that this will be built automatically if you do: make start API=true . To test the workflow archive, use PROFILE=mysql or PROFILE=postgres : make start PROFILE = mysql You'll have, either: Postgres on http://localhost:5432 , run make postgres-cli to access. MySQL on http://localhost:3306 , run make mysql-cli to access. To test SSO integration, use PROFILE=sso : make start UI = true PROFILE = sso","title":"Developing locally"},{"location":"running-locally/#running-e2e-tests-locally","text":"Start up Argo Workflows using the following: make start PROFILE = mysql AUTH_MODE = client STATIC_FILES = false API = true If you want to run Azure tests against a local Azurite: kubectl -n $KUBE_NAMESPACE apply -f test/e2e/azure/deploy-azurite.yaml make start","title":"Running E2E tests locally"},{"location":"running-locally/#running-one-test","text":"In most cases, you want to run the test that relates to your changes locally. You should not run all the tests suites. Our CI will run those concurrently when you create a PR, which will give you feedback much faster. Find the test that you want to run in test/e2e make TestArtifactServer","title":"Running One Test"},{"location":"running-locally/#running-a-set-of-tests","text":"You can find the build tag at the top of the test file. //go:build api You need to run make test-{buildTag} , so for api that would be: make test-api","title":"Running A Set Of Tests"},{"location":"running-locally/#diagnosing-test-failure","text":"Tests often fail: that's good. To diagnose failure: Run kubectl get pods , are pods in the state you expect? Run kubectl get wf , is your workflow in the state you expect? What do the pod logs say? I.e. kubectl logs . Check the controller and argo-server logs. These are printed to the console you ran make start in. Is anything logged at level=error ? If tests run slowly or time out, factory reset your Kubernetes cluster.","title":"Diagnosing Test Failure"},{"location":"running-locally/#committing","text":"Before you commit code and raise a PR, always run: make pre-commit -B Please do the following when creating your PR: Sign-off your commits. Use Conventional Commit messages . Suffix the issue number. Examples: git commit --signoff -m 'fix: Fixed broken thing. Fixes #1234' git commit --signoff -m 'feat: Added a new feature. Fixes #1234'","title":"Committing"},{"location":"running-locally/#troubleshooting","text":"When running make pre-commit -B , if you encounter errors like make: *** [pkg/apiclient/clusterworkflowtemplate/cluster-workflow-template.swagger.json] Error 1 , ensure that you have checked out your code into $GOPATH/src/github.com/argoproj/argo-workflows . If you encounter \"out of heap\" issues when building UI through Docker, please validate resources allocated to Docker. 
Compilation may fail if allocated RAM is less than 4Gi. To start profiling with pprof , pass ARGO_PPROF=true when starting the controller locally. Then run the following: go tool pprof http://localhost:6060/debug/pprof/profile # 30-second CPU profile go tool pprof http://localhost:6060/debug/pprof/heap # heap profile go tool pprof http://localhost:6060/debug/pprof/block # goroutine blocking profile","title":"Troubleshooting"},{"location":"running-locally/#using-multiple-terminals","text":"I run the controller in one terminal, and the UI in another. I like the UI: it is much faster to debug workflows than the terminal. This allows you to make changes to the controller and re-start it, without restarting the UI (which I think takes too long to start-up). As a convenience, CTRL=false implies UI=true , so just run: make start CTRL = false","title":"Using Multiple Terminals"},{"location":"running-nix/","text":"Try Argo using Nix \u00b6 Nix is a package manager / build tool which focuses on reproducible build environments. Argo Workflows has some basic support for Nix which is enough to get Argo Workflows up and running with minimal effort. Here are the steps to follow: Modify your hosts file and set up a Kubernetes cluster according to Running Locally . Don't worry about the other instructions. Install Nix . Run nix develop --extra-experimental-features nix-command --extra-experimental-features flakes ./dev/nix/ --impure (you can add the extra features as a default in your nix.conf file). Run devenv up . Warning \u00b6 This is still bare-bones at the moment, any feature in the Makefile not mentioned here is excluded for now. In practice, this means that only a make start UI=true equivalent is supported at the moment. As an additional caveat, there are no LDFlags set in the build; as a result the UI will show 0.0.0-unknown for the version. How do I upgrade a dependency? \u00b6 Most dependencies are in the Nix packages repository but if you want a specific version, you might have to build it yourself. This is fairly trivial in Nix, the idea is to just change the version string to whatever package you are concerned about. Changing a python dependency version \u00b6 If we look at the mkdocs dependency, we see a call to buildPythonPackage , to change the version we need to just modify the version string. Doing this will display a failure because the hash from the fetchPypi command will now differ, it will also display the correct hash, copy this hash and replace the existing hash value. Changing a go dependency version \u00b6 The almost exact same principles apply here, the only difference being you must change the vendorHash and the sha256 fields. The vendorHash is a hash of the vendored dependencies while the sha256 is for the sources fetched from the fetchFromGithub call. Why am I getting a vendorSha256 mismatch ? \u00b6 Unfortunately, dependabot is not capable of upgrading flakes automatically, when the go modules are automatically upgraded the hash of the vendor dependencies changes but this change isn't automatically reflected in the nix file. The vendorSha256 field that needs to be upgraded can be found by searching for ${package.name} = pkgs.buildGoModule in the nix file.","title":"Try Argo using Nix"},{"location":"running-nix/#try-argo-using-nix","text":"Nix is a package manager / build tool which focuses on reproducible build environments. Argo Workflows has some basic support for Nix which is enough to get Argo Workflows up and running with minimal effort. 
Here are the steps to follow: Modify your hosts file and set up a Kubernetes cluster according to Running Locally . Don't worry about the other instructions. Install Nix . Run nix develop --extra-experimental-features nix-command --extra-experimental-features flakes ./dev/nix/ --impure (you can add the extra features as a default in your nix.conf file). Run devenv up .","title":"Try Argo using Nix"},{"location":"running-nix/#warning","text":"This is still bare-bones at the moment, any feature in the Makefile not mentioned here is excluded for now. In practice, this means that only a make start UI=true equivalent is supported at the moment. As an additional caveat, there are no LDFlags set in the build; as a result the UI will show 0.0.0-unknown for the version.","title":"Warning"},{"location":"running-nix/#how-do-i-upgrade-a-dependency","text":"Most dependencies are in the Nix packages repository but if you want a specific version, you might have to build it yourself. This is fairly trivial in Nix, the idea is to just change the version string to whatever package you are concerned about.","title":"How do I upgrade a dependency?"},{"location":"running-nix/#changing-a-python-dependency-version","text":"If we look at the mkdocs dependency, we see a call to buildPythonPackage , to change the version we need to just modify the version string. Doing this will display a failure because the hash from the fetchPypi command will now differ, it will also display the correct hash, copy this hash and replace the existing hash value.","title":"Changing a python dependency version"},{"location":"running-nix/#changing-a-go-dependency-version","text":"The almost exact same principles apply here, the only difference being you must change the vendorHash and the sha256 fields. The vendorHash is a hash of the vendored dependencies while the sha256 is for the sources fetched from the fetchFromGithub call.","title":"Changing a go dependency version"},{"location":"running-nix/#why-am-i-getting-a-vendorsha256-mismatch","text":"Unfortunately, dependabot is not capable of upgrading flakes automatically, when the go modules are automatically upgraded the hash of the vendor dependencies changes but this change isn't automatically reflected in the nix file. The vendorSha256 field that needs to be upgraded can be found by searching for ${package.name} = pkgs.buildGoModule in the nix file.","title":"Why am I getting a vendorSha256 mismatch ?"},{"location":"scaling/","text":"Scaling \u00b6 For running large workflows, you'll typically need to scale the controller to match. Horizontally Scaling \u00b6 You cannot horizontally scale the controller. v3.0 As of v3.0, the controller supports having a hot-standby for High Availability . Vertically Scaling \u00b6 You can scale the controller vertically in these ways: Container Resource Requests \u00b6 If you observe the Controller using its total CPU or memory requests, you should increase those. Adding Goroutines to Increase Concurrency \u00b6 If you have sufficient CPU cores, you can take advantage of them with more goroutines: If you have many Workflows and you notice they're not being reconciled fast enough, increase --workflow-workers . If you're using TTLStrategy in your Workflows and you notice they're not being deleted fast enough, increase --workflow-ttl-workers . If you're using PodGC in your Workflows and you notice the Pods aren't being deleted fast enough, increase --pod-cleanup-workers . 
v3.5 and after If you're using a lot of CronWorkflows and they don't seem to be firing on time, increase --cron-workflow-workers . K8S API Client Side Rate Limiting \u00b6 The K8S client library rate limits the messages that can go out. If you frequently see messages similar to this in the Controller log (issued by the library): Waited for 7.090296384s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t Or, in >= v3.5, if you see warnings similar to this (could be any CR, not just WorkflowTemplate ): Waited for 7.090296384s, request:GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t Then, if your K8S API Server can handle more requests: Increase both --qps and --burst arguments for the Controller. The qps value indicates the average number of queries per second allowed by the K8S Client. The burst value is the number of queries/sec the Client receives before it starts enforcing qps , so typically burst > qps . If not set, the default values are qps=20 and burst=30 (as of v3.5 (refer to cmd/workflow-controller/main.go in case the values change)). Sharding \u00b6 One Install Per Namespace \u00b6 Rather than running a single installation in your cluster, run one per namespace using the --namespaced flag. Instance ID \u00b6 Within a cluster can use instance ID to run N Argo instances within a cluster. Create one namespace for each Argo, e.g. argo-i1 , argo-i2 :. Edit workflow-controller-configmap.yaml for each namespace to set an instance ID. apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : instanceID : i1 v2.9 and after You may need to pass the instance ID to the CLI: argo --instanceid i1 submit my-wf.yaml You do not need to have one instance ID per namespace, you could have many or few. Maximum Recursion Depth \u00b6 In order to protect users against infinite recursion, the controller has a default maximum recursion depth of 100 calls to templates. This protection can be disabled with the environment variable DISABLE_MAX_RECURSION=true Miscellaneous \u00b6 See also Running At Massive Scale .","title":"Scaling"},{"location":"scaling/#scaling","text":"For running large workflows, you'll typically need to scale the controller to match.","title":"Scaling"},{"location":"scaling/#horizontally-scaling","text":"You cannot horizontally scale the controller. v3.0 As of v3.0, the controller supports having a hot-standby for High Availability .","title":"Horizontally Scaling"},{"location":"scaling/#vertically-scaling","text":"You can scale the controller vertically in these ways:","title":"Vertically Scaling"},{"location":"scaling/#container-resource-requests","text":"If you observe the Controller using its total CPU or memory requests, you should increase those.","title":"Container Resource Requests"},{"location":"scaling/#adding-goroutines-to-increase-concurrency","text":"If you have sufficient CPU cores, you can take advantage of them with more goroutines: If you have many Workflows and you notice they're not being reconciled fast enough, increase --workflow-workers . If you're using TTLStrategy in your Workflows and you notice they're not being deleted fast enough, increase --workflow-ttl-workers . If you're using PodGC in your Workflows and you notice the Pods aren't being deleted fast enough, increase --pod-cleanup-workers . 
v3.5 and after If you're using a lot of CronWorkflows and they don't seem to be firing on time, increase --cron-workflow-workers .","title":"Adding Goroutines to Increase Concurrency"},{"location":"scaling/#k8s-api-client-side-rate-limiting","text":"The K8S client library rate limits the messages that can go out. If you frequently see messages similar to this in the Controller log (issued by the library): Waited for 7.090296384s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t Or, in >= v3.5, if you see warnings similar to this (could be any CR, not just WorkflowTemplate ): Waited for 7.090296384s, request:GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t Then, if your K8S API Server can handle more requests: Increase both --qps and --burst arguments for the Controller. The qps value indicates the average number of queries per second allowed by the K8S Client. The burst value is the number of queries/sec the Client receives before it starts enforcing qps , so typically burst > qps . If not set, the default values are qps=20 and burst=30 (as of v3.5 (refer to cmd/workflow-controller/main.go in case the values change)).","title":"K8S API Client Side Rate Limiting"},{"location":"scaling/#sharding","text":"","title":"Sharding"},{"location":"scaling/#one-install-per-namespace","text":"Rather than running a single installation in your cluster, run one per namespace using the --namespaced flag.","title":"One Install Per Namespace"},{"location":"scaling/#instance-id","text":"Within a cluster can use instance ID to run N Argo instances within a cluster. Create one namespace for each Argo, e.g. argo-i1 , argo-i2 :. Edit workflow-controller-configmap.yaml for each namespace to set an instance ID. apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : instanceID : i1 v2.9 and after You may need to pass the instance ID to the CLI: argo --instanceid i1 submit my-wf.yaml You do not need to have one instance ID per namespace, you could have many or few.","title":"Instance ID"},{"location":"scaling/#maximum-recursion-depth","text":"In order to protect users against infinite recursion, the controller has a default maximum recursion depth of 100 calls to templates. This protection can be disabled with the environment variable DISABLE_MAX_RECURSION=true","title":"Maximum Recursion Depth"},{"location":"scaling/#miscellaneous","text":"See also Running At Massive Scale .","title":"Miscellaneous"},{"location":"security/","text":"Security \u00b6 To report security issues . \ud83d\udca1 Read Practical Argo Workflows Hardening . Workflow Controller Security \u00b6 This has three parts. Controller Permissions \u00b6 The controller has permission (via Kubernetes RBAC + its config map) with either all namespaces (cluster-scope install) or a single managed namespace (namespace-install), notably: List/get/update workflows, and cron-workflows. Create/get/delete pods, PVCs, and PDBs. List/get template, config maps, service accounts, and secrets. See workflow-controller-cluster-role.yaml or workflow-controller-role.yaml User Permissions \u00b6 Users minimally need permission to create/read workflows. The controller will then create workflow pods (config maps etc) on behalf of the users, even if the user does not have permission to do this themselves. The controller will only create workflow pods in the workflow's namespace. 
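As an illustration only (this Role is a sketch, not one of the manifests shipped with Argo; the name workflow-user-minimal and the exact verbs are assumptions you should adapt to your own needs), a minimal Role for such a user could grant access to Workflow resources alone:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  # hypothetical name for a user who may only submit and read Workflows
  name: workflow-user-minimal
rules:
  - apiGroups:
      - argoproj.io
    resources:
      - workflows
    verbs:
      - create
      - get
      - list
      - watch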
A way to think of this is that, if the user has permission to create a workflow in a namespace, then it is OK to create pods or anything else for them in that namespace. If the user only has permission to create workflows, then they will be typically unable to configure other necessary resources such as config maps, or view the outcome of their workflow. This is useful when the user is a service. Warning If you allow users to create workflows in the controller's namespace (typically argo ), it may be possible for users to modify the controller itself. In a namespace-install the managed namespace should therefore not be the controller's namespace. You can typically further restrict what a user can do to just being able to submit workflows from templates using the workflow restrictions feature . UI Access \u00b6 If you want a user to have read-only access to the entirety of the Argo UI for their namespace, a sample role for them may look like: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : ui-user-read-only rules : # k8s standard APIs - apiGroups : - \"\" resources : - events - pods - pods/log verbs : - get - list - watch # Argo APIs. See also https://github.com/argoproj/argo-workflows/blob/main/manifests/cluster-install/workflow-controller-rbac/workflow-aggregate-roles.yaml#L4 - apiGroups : - argoproj.io resources : - eventsources - sensors - workflows - workfloweventbindings - workflowtemplates - clusterworkflowtemplates - cronworkflows - cronworkflows - workflowtaskresults verbs : - get - list - watch Workflow Pod Permissions \u00b6 Workflow pods run using either: The default service account. The service account declared in the workflow spec. There is no restriction on which service account in a namespace may be used. This service account typically needs permissions . Different service accounts should be used if a workflow pod needs to have elevated permissions, e.g. to create other resources. The main container will have the service account token mounted, allowing the main container to patch pods (among other permissions). Set automountServiceAccountToken to false to prevent this. See fields . By default, workflows pods run as root . To further secure workflow pods, set the workflow pod security context . You should configure the controller with the correct workflow executor for your trade off between security and scalability. These settings can be set by default using workflow defaults . Argo Server Security \u00b6 Argo Server implements security in three layers. Firstly, you should enable transport layer security to ensure your data cannot be read in transit. Secondly, you should enable an authentication mode to ensure that you do not run workflows from unknown users. Finally, you should configure the argo-server role and role binding with the correct permissions. Read-Only \u00b6 You can achieve this by configuring the argo-server role ( example with only read access (i.e. only get / list / watch verbs)). Network Security \u00b6 Argo Workflows requires various levels of network access depending on configuration and the features enabled. The following describes the different workflow components and their network access needs, to help provide guidance on how to configure the argo namespace in a secure manner (e.g. NetworkPolicy ). Argo Server \u00b6 The Argo Server is commonly exposed to end-users to provide users with a UI for visualizing and managing their workflows. It must also be exposed if leveraging webhooks to trigger workflows. 
Both of these use cases require that the argo-server Service to be exposed for ingress traffic (e.g. with an Ingress object or load balancer). Note that the Argo UI is also available to be accessed by running the server locally (i.e. argo server ) using local KUBECONFIG credentials, and visiting the UI over https://localhost:2746 . The Argo Server additionally has a feature to allow downloading of artifacts through the UI. This feature requires that the argo-server be given egress access to the underlying artifact provider (e.g. S3, GCS, MinIO, Artifactory, Azure Blob Storage) in order to download and stream the artifact. Workflow Controller \u00b6 The workflow-controller Deployment exposes a Prometheus metrics endpoint (workflow-controller-metrics:9090) so that a Prometheus server can periodically scrape for controller level metrics. Since Prometheus is typically running in a separate namespace, the argo namespace should be configured to allow cross-namespace ingress access to the workflow-controller-metrics Service. Database access \u00b6 A persistent store can be configured for either archiving or offloading workflows. If either of these features are enabled, both the workflow-controller and argo-server Deployments will need egress network access to the external database used for archiving/offloading.","title":"Security"},{"location":"security/#security","text":"To report security issues . \ud83d\udca1 Read Practical Argo Workflows Hardening .","title":"Security"},{"location":"security/#workflow-controller-security","text":"This has three parts.","title":"Workflow Controller Security"},{"location":"security/#controller-permissions","text":"The controller has permission (via Kubernetes RBAC + its config map) with either all namespaces (cluster-scope install) or a single managed namespace (namespace-install), notably: List/get/update workflows, and cron-workflows. Create/get/delete pods, PVCs, and PDBs. List/get template, config maps, service accounts, and secrets. See workflow-controller-cluster-role.yaml or workflow-controller-role.yaml","title":"Controller Permissions"},{"location":"security/#user-permissions","text":"Users minimally need permission to create/read workflows. The controller will then create workflow pods (config maps etc) on behalf of the users, even if the user does not have permission to do this themselves. The controller will only create workflow pods in the workflow's namespace. A way to think of this is that, if the user has permission to create a workflow in a namespace, then it is OK to create pods or anything else for them in that namespace. If the user only has permission to create workflows, then they will be typically unable to configure other necessary resources such as config maps, or view the outcome of their workflow. This is useful when the user is a service. Warning If you allow users to create workflows in the controller's namespace (typically argo ), it may be possible for users to modify the controller itself. In a namespace-install the managed namespace should therefore not be the controller's namespace. 
You can typically further restrict what a user can do to just being able to submit workflows from templates using the workflow restrictions feature .","title":"User Permissions"},{"location":"security/#ui-access","text":"If you want a user to have read-only access to the entirety of the Argo UI for their namespace, a sample role for them may look like: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : ui-user-read-only rules : # k8s standard APIs - apiGroups : - \"\" resources : - events - pods - pods/log verbs : - get - list - watch # Argo APIs. See also https://github.com/argoproj/argo-workflows/blob/main/manifests/cluster-install/workflow-controller-rbac/workflow-aggregate-roles.yaml#L4 - apiGroups : - argoproj.io resources : - eventsources - sensors - workflows - workfloweventbindings - workflowtemplates - clusterworkflowtemplates - cronworkflows - cronworkflows - workflowtaskresults verbs : - get - list - watch","title":"UI Access"},{"location":"security/#workflow-pod-permissions","text":"Workflow pods run using either: The default service account. The service account declared in the workflow spec. There is no restriction on which service account in a namespace may be used. This service account typically needs permissions . Different service accounts should be used if a workflow pod needs to have elevated permissions, e.g. to create other resources. The main container will have the service account token mounted, allowing the main container to patch pods (among other permissions). Set automountServiceAccountToken to false to prevent this. See fields . By default, workflows pods run as root . To further secure workflow pods, set the workflow pod security context . You should configure the controller with the correct workflow executor for your trade off between security and scalability. These settings can be set by default using workflow defaults .","title":"Workflow Pod Permissions"},{"location":"security/#argo-server-security","text":"Argo Server implements security in three layers. Firstly, you should enable transport layer security to ensure your data cannot be read in transit. Secondly, you should enable an authentication mode to ensure that you do not run workflows from unknown users. Finally, you should configure the argo-server role and role binding with the correct permissions.","title":"Argo Server Security"},{"location":"security/#read-only","text":"You can achieve this by configuring the argo-server role ( example with only read access (i.e. only get / list / watch verbs)).","title":"Read-Only"},{"location":"security/#network-security","text":"Argo Workflows requires various levels of network access depending on configuration and the features enabled. The following describes the different workflow components and their network access needs, to help provide guidance on how to configure the argo namespace in a secure manner (e.g. NetworkPolicy ).","title":"Network Security"},{"location":"security/#argo-server","text":"The Argo Server is commonly exposed to end-users to provide users with a UI for visualizing and managing their workflows. It must also be exposed if leveraging webhooks to trigger workflows. Both of these use cases require that the argo-server Service to be exposed for ingress traffic (e.g. with an Ingress object or load balancer). Note that the Argo UI is also available to be accessed by running the server locally (i.e. argo server ) using local KUBECONFIG credentials, and visiting the UI over https://localhost:2746 . 
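As a sketch only (assuming a cluster whose CNI enforces NetworkPolicy, the app: argo-server pod label used by the standard install manifests, and an ingress controller running in a namespace called ingress-nginx purely for illustration), a policy admitting only ingress-controller traffic to the Argo Server might look like:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  # hypothetical name; adjust to your conventions
  name: argo-server-ingress
  namespace: argo
spec:
  podSelector:
    matchLabels:
      app: argo-server   # assumes the default label on the argo-server pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # assumption: your ingress controller's namespace
      ports:
        - protocol: TCP
          port: 2746

Any real policy would also need to admit traffic from your webhook sources and health checks, so treat this as a starting point rather than a complete configuration.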
The Argo Server additionally has a feature to allow downloading of artifacts through the UI. This feature requires that the argo-server be given egress access to the underlying artifact provider (e.g. S3, GCS, MinIO, Artifactory, Azure Blob Storage) in order to download and stream the artifact.","title":"Argo Server"},{"location":"security/#workflow-controller","text":"The workflow-controller Deployment exposes a Prometheus metrics endpoint (workflow-controller-metrics:9090) so that a Prometheus server can periodically scrape for controller level metrics. Since Prometheus is typically running in a separate namespace, the argo namespace should be configured to allow cross-namespace ingress access to the workflow-controller-metrics Service.","title":"Workflow Controller"},{"location":"security/#database-access","text":"A persistent store can be configured for either archiving or offloading workflows. If either of these features are enabled, both the workflow-controller and argo-server Deployments will need egress network access to the external database used for archiving/offloading.","title":"Database access"},{"location":"service-accounts/","text":"Service Accounts \u00b6 Configure the service account to run Workflows \u00b6 Roles, Role-Bindings, and Service Accounts \u00b6 In order for Argo to support features such as artifacts, outputs, access to secrets, etc. it needs to communicate with Kubernetes resources using the Kubernetes API. To communicate with the Kubernetes API, Argo uses a ServiceAccount to authenticate itself to the Kubernetes API. You can specify which Role (i.e. which permissions) the ServiceAccount that Argo uses by binding a Role to a ServiceAccount using a RoleBinding Then, when submitting Workflows you can specify which ServiceAccount Argo uses using: argo submit --serviceaccount When no ServiceAccount is provided, Argo will use the default ServiceAccount from the namespace from which it is run, which will almost always have insufficient privileges by default. For more information about granting Argo the necessary permissions for your use case see Workflow RBAC . Granting admin privileges \u00b6 For the purposes of this demo, we will grant the default ServiceAccount admin privileges (i.e., we will bind the admin Role to the default ServiceAccount of the current namespace): kubectl create rolebinding default-admin --clusterrole = admin --serviceaccount = argo:default -n argo Note that this will grant admin privileges to the default ServiceAccount in the namespace that the command is run from, so you will only be able to run Workflows in the namespace where the RoleBinding was made.","title":"Service Accounts"},{"location":"service-accounts/#service-accounts","text":"","title":"Service Accounts"},{"location":"service-accounts/#configure-the-service-account-to-run-workflows","text":"","title":"Configure the service account to run Workflows"},{"location":"service-accounts/#roles-role-bindings-and-service-accounts","text":"In order for Argo to support features such as artifacts, outputs, access to secrets, etc. it needs to communicate with Kubernetes resources using the Kubernetes API. To communicate with the Kubernetes API, Argo uses a ServiceAccount to authenticate itself to the Kubernetes API. You can specify which Role (i.e. 
which permissions) the ServiceAccount that Argo uses has, by binding a Role to that ServiceAccount using a RoleBinding. Then, when submitting Workflows, you can specify which ServiceAccount Argo uses with: argo submit --serviceaccount When no ServiceAccount is provided, Argo will use the default ServiceAccount from the namespace from which it is run, which will almost always have insufficient privileges by default. For more information about granting Argo the necessary permissions for your use case see Workflow RBAC .","title":"Roles, Role-Bindings, and Service Accounts"},{"location":"service-accounts/#granting-admin-privileges","text":"For the purposes of this demo, we will grant the default ServiceAccount admin privileges (i.e., we will bind the admin Role to the default ServiceAccount of the current namespace): kubectl create rolebinding default-admin --clusterrole = admin --serviceaccount = argo:default -n argo Note that this will grant admin privileges to the default ServiceAccount in the namespace that the command is run from, so you will only be able to run Workflows in the namespace where the RoleBinding was made.","title":"Granting admin privileges"},{"location":"sidecar-injection/","text":"Sidecar Injection \u00b6 Automatic (i.e. mutating webhook based) sidecar injection systems, including service meshes such as Anthos and Istio Proxy, create a unique problem for Kubernetes workloads that run to completion. Because sidecars are injected outside of the view of the workflow controller, the controller has no awareness of them. It has no opportunity to rewrite the container's command (when using the Emissary Executor), and because the sidecar's process runs as PID 1, which is protected, it can be impossible for the wait container to terminate the sidecar. You will minimize problems by not using Istio with Argo Workflows. See #1282 . Support Matrix \u00b6 Key: Unsupported - this executor is no longer supported Any - we can kill any image KubectlExec - we kill images by running kubectl exec Executor Sidecar Injected Sidecar docker Any Unsupported emissary Any KubectlExec k8sapi Shell KubectlExec kubelet Shell KubectlExec pns Any Any How We Kill Sidecars Using kubectl exec \u00b6 v3.1 and after Kubernetes does not provide a way to kill a single container. You can delete a pod, but this kills all containers, and loses all information and logs of that pod. Instead, try to mimic the Kubernetes termination behavior, which is: SIGTERM PID 1 Wait for the pod's terminationGracePeriodSeconds (30s by default). SIGKILL PID 1 The following are not supported: preStop STOPSIGNAL To do this, it must be possible to run a kubectl exec command that kills the injected sidecar. By default it runs /bin/sh -c 'kill 1' . This can fail: No /bin/sh . Process is not running as PID 1 (which is becoming the default these days due to runAsNonRoot ). Process does not correctly respond to kill 1 (e.g. some shell script weirdness). You can override the kill command by using a pod annotation (where %d is the signal number), for example: spec : podMetadata : annotations : workflows.argoproj.io/kill-cmd-istio-proxy : '[\"pilot-agent\", \"request\", \"POST\", \"quitquitquit\"]' workflows.argoproj.io/kill-cmd-vault-agent : '[\"sh\", \"-c\", \"kill -%d 1\"]' workflows.argoproj.io/kill-cmd-sidecar : '[\"sh\", \"-c\", \"kill -%d $(pidof entrypoint.sh)\"]'","title":"Sidecar Injection"},{"location":"sidecar-injection/#sidecar-injection","text":"Automatic (i.e.
mutating webhook based) sidecar injection systems, including service meshes such as Anthos and Istio Proxy, create a unique problem for Kubernetes workloads that run to completion. Because sidecars are injected outside of the view of the workflow controller, the controller has no awareness of them. It has no opportunity to rewrite the containers command (when using the Emissary Executor) and as the sidecar's process will run as PID 1, which is protected. It can be impossible for the wait container to terminate the sidecar. You will minimize problems by not using Istio with Argo Workflows. See #1282 .","title":"Sidecar Injection"},{"location":"sidecar-injection/#support-matrix","text":"Key: Unsupported - this executor is no longer supported Any - we can kill any image KubectlExec - we kill images by running kubectl exec Executor Sidecar Injected Sidecar docker Any Unsupported emissary Any KubectlExec k8sapi Shell KubectlExec kubelet Shell KubectlExec pns Any Any","title":"Support Matrix"},{"location":"sidecar-injection/#how-we-kill-sidecars-using-kubectl-exec","text":"v3.1 and after Kubernetes does not provide a way to kill a single container. You can delete a pod, but this kills all containers, and loses all information and logs of that pod. Instead, try to mimic the Kubernetes termination behavior, which is: SIGTERM PID 1 Wait for the pod's terminateGracePeriodSeconds (30s by default). SIGKILL PID 1 The following are not supported: preStop STOPSIGNAL To do this, it must be possible to run a kubectl exec command that kills the injected sidecar. By default it runs /bin/sh -c 'kill 1' . This can fail: No /bin/sh . Process is not running as PID 1 (which is becoming the default these days due to runAsNonRoot ). Process does not correctly respond to kill 1 (e.g. some shell script weirdness). You can override the kill command by using a pod annotation (where %d is the signal number), for example: spec : podMetadata : annotations : workflows.argoproj.io/kill-cmd-istio-proxy : '[\"pilot-agent\", \"request\", \"POST\", \"quitquitquit\"]' workflows.argoproj.io/kill-cmd-vault-agent : '[\"sh\", \"-c\", \"kill -%d 1\"]' workflows.argoproj.io/kill-cmd-sidecar : '[\"sh\", \"-c\", \"kill -%d $(pidof entrypoint.sh)\"]'","title":"How We Kill Sidecars Using kubectl exec"},{"location":"static-code-analysis/","text":"Static Code Analysis \u00b6 We use the following static code analysis tools: golangci-lint and eslint for compile time linting. Snyk for dependency and image scanning (SCA). These are at least run daily or on each pull request.","title":"Static Code Analysis"},{"location":"static-code-analysis/#static-code-analysis","text":"We use the following static code analysis tools: golangci-lint and eslint for compile time linting. Snyk for dependency and image scanning (SCA). These are at least run daily or on each pull request.","title":"Static Code Analysis"},{"location":"stress-testing/","text":"Stress Testing \u00b6 Install gcloud binary. 
# Login to GCP: gcloud auth login # Set-up your config (if needed): gcloud config set project alex-sb # Create a cluster (the default region is us-west-2; if you're not in the west of the USA, you might want a different region): gcloud container clusters create-auto argo-workflows-stress-1 # Get credentials: gcloud container clusters get-credentials argo-workflows-stress-1 # Install workflows (If this fails, try running it again): make start PROFILE = stress # Make sure pods are running: kubectl get deployments # Run a test workflow: argo submit examples/hello-world.yaml --watch Checks Open http://localhost:2746/workflows and check it loads and that you can run a workflow. Open http://localhost:9090/metrics and check you can see the Prometheus metrics. Open http://localhost:9091/graph and check you can see a Prometheus graph. You can use this Tab Auto Refresh Chrome extension to auto-refresh the page. Open http://localhost:6060/debug/pprof and check you can access pprof . Run go run ./test/stress/tool -n 10000 to run a large number of workflows. Check Prometheus: See how many Kubernetes API requests are being made. You will see about one Update workflows per reconciliation, multiple Create pods . You should expect to see one Get workflowtemplates per workflow (done on first reconciliation). If you see anything else, that might be a problem. How many errors were logged? log_messages{level=\"error\"} What was the cause? Check PProf to see if there are any hot spots: go tool pprof -png http://localhost:6060/debug/pprof/allocs go tool pprof -png http://localhost:6060/debug/pprof/heap go tool pprof -png http://localhost:6060/debug/pprof/profile Clean-up \u00b6 gcloud container clusters delete argo-workflows-stress-1","title":"Stress Testing"},{"location":"stress-testing/#stress-testing","text":"Install gcloud binary. # Login to GCP: gcloud auth login # Set-up your config (if needed): gcloud config set project alex-sb # Create a cluster (the default region is us-west-2; if you're not in the west of the USA, you might want a different region): gcloud container clusters create-auto argo-workflows-stress-1 # Get credentials: gcloud container clusters get-credentials argo-workflows-stress-1 # Install workflows (If this fails, try running it again): make start PROFILE = stress # Make sure pods are running: kubectl get deployments # Run a test workflow: argo submit examples/hello-world.yaml --watch Checks Open http://localhost:2746/workflows and check it loads and that you can run a workflow. Open http://localhost:9090/metrics and check you can see the Prometheus metrics. Open http://localhost:9091/graph and check you can see a Prometheus graph. You can use this Tab Auto Refresh Chrome extension to auto-refresh the page. Open http://localhost:6060/debug/pprof and check you can access pprof . Run go run ./test/stress/tool -n 10000 to run a large number of workflows. Check Prometheus: See how many Kubernetes API requests are being made. You will see about one Update workflows per reconciliation, multiple Create pods . You should expect to see one Get workflowtemplates per workflow (done on first reconciliation). If you see anything else, that might be a problem. How many errors were logged? log_messages{level=\"error\"} What was the cause?
Check PProf to see if there are any hot spots: go tool pprof -png http://localhost:6060/debug/pprof/allocs go tool pprof -png http://localhost:6060/debug/pprof/heap go tool pprof -png http://localhost:6060/debug/pprof/profile","title":"Stress Testing"},{"location":"stress-testing/#clean-up","text":"gcloud container clusters delete argo-workflows-stress-1","title":"Clean-up"},{"location":"survey-data-privacy/","text":"Survey Data Privacy \u00b6 Privacy policy","title":"Survey Data Privacy"},{"location":"survey-data-privacy/#survey-data-privacy","text":"Privacy policy","title":"Survey Data Privacy"},{"location":"suspend-template/","text":"Suspend Template \u00b6 v2.1 See Suspending .","title":"Suspend Template"},{"location":"suspend-template/#suspend-template","text":"v2.1 See Suspending .","title":"Suspend Template"},{"location":"swagger/","text":"API Reference \u00b6 SwaggerUI window.onload = () => { window.ui = SwaggerUIBundle({ url: \"https://raw.githubusercontent.com/argoproj/argo-workflows/main/api/openapi-spec/swagger.json\", dom_id: \"#swagger-ui\", }); };","title":"API Reference"},{"location":"swagger/#api-reference","text":"SwaggerUI window.onload = () => { window.ui = SwaggerUIBundle({ url: \"https://raw.githubusercontent.com/argoproj/argo-workflows/main/api/openapi-spec/swagger.json\", dom_id: \"#swagger-ui\", }); };","title":"API Reference"},{"location":"synchronization/","text":"Synchronization \u00b6 v2.10 and after Introduction \u00b6 Synchronization enables users to limit the parallel execution of certain workflows or templates within a workflow without having to restrict others. Users can create multiple synchronization configurations in the ConfigMap that can be referred to from a workflow or template within a workflow. Alternatively, users can configure a mutex to prevent concurrent execution of templates or workflows using the same mutex. For example: apiVersion : v1 kind : ConfigMap metadata : name : my-config data : workflow : \"1\" # Only one workflow can run at a given time in a particular namespace template : \"2\" # Two instances of the template can run at a given time in a particular namespace Workflow-level Synchronization \u00b6 Workflow-level synchronization limits parallel execution of the workflow if workflows have the same synchronization reference. In this example, Workflow refers to the workflow synchronization key, which is configured with a limit of 1, so only one workflow instance will be executed at a given time even if multiple workflows are created. Using a semaphore configured by a ConfigMap : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-wf-level- spec : entrypoint : whalesay synchronization : semaphore : configMapKeyRef : name : my-config key : workflow templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ] Using a mutex: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-wf-level- spec : entrypoint : whalesay synchronization : mutex : name : workflow templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ] Template-level Synchronization \u00b6 Template-level synchronization limits parallel execution of the template across workflows if templates have the same synchronization reference.
In this example, the acquire-lock template has a synchronization reference to the template key, which is configured with a limit of 2, so only two instances of the template will be executed at a given time, even if multiple steps/tasks within a workflow or different workflows refer to the same template. Using a semaphore configured by a ConfigMap : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-tmpl-level- spec : entrypoint : synchronization-tmpl-level-example templates : - name : synchronization-tmpl-level-example steps : - - name : synchronization-acquire-lock template : acquire-lock arguments : parameters : - name : seconds value : \"{{item}}\" withParam : '[\"1\",\"2\",\"3\",\"4\",\"5\"]' - name : acquire-lock synchronization : semaphore : configMapKeyRef : name : my-config key : template container : image : alpine:latest command : [ sh , -c ] args : [ \"sleep 10; echo acquired lock\" ] Using a mutex: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-tmpl-level- spec : entrypoint : synchronization-tmpl-level-example templates : - name : synchronization-tmpl-level-example steps : - - name : synchronization-acquire-lock template : acquire-lock arguments : parameters : - name : seconds value : \"{{item}}\" withParam : '[\"1\",\"2\",\"3\",\"4\",\"5\"]' - name : acquire-lock synchronization : mutex : name : template container : image : alpine:latest command : [ sh , -c ] args : [ \"sleep 10; echo acquired lock\" ] Examples: Workflow level semaphore Workflow level mutex Step level semaphore Step level mutex Other Parallelism support \u00b6 In addition to this synchronization, the workflow controller supports a parallelism setting that applies to all workflows in the system (it is not granular to a class of workflows, or tasks within them). Furthermore, there is a parallelism setting at the workflow and template level, but this only restricts total concurrent executions of tasks within the same workflow.","title":"Synchronization"},{"location":"synchronization/#synchronization","text":"v2.10 and after","title":"Synchronization"},{"location":"synchronization/#introduction","text":"Synchronization enables users to limit the parallel execution of certain workflows or templates within a workflow without having to restrict others. Users can create multiple synchronization configurations in the ConfigMap that can be referred to from a workflow or template within a workflow. Alternatively, users can configure a mutex to prevent concurrent execution of templates or workflows using the same mutex. For example: apiVersion : v1 kind : ConfigMap metadata : name : my-config data : workflow : \"1\" # Only one workflow can run at a given time in a particular namespace template : \"2\" # Two instances of the template can run at a given time in a particular namespace","title":"Introduction"},{"location":"synchronization/#workflow-level-synchronization","text":"Workflow-level synchronization limits parallel execution of the workflow if workflows have the same synchronization reference. In this example, Workflow refers to the workflow synchronization key, which is configured with a limit of 1, so only one workflow instance will be executed at a given time even if multiple workflows are created.
Using a semaphore configured by a ConfigMap : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-wf-level- spec : entrypoint : whalesay synchronization : semaphore : configMapKeyRef : name : my-config key : workflow templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ] Using a mutex: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-wf-level- spec : entrypoint : whalesay synchronization : mutex : name : workflow templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ]","title":"Workflow-level Synchronization"},{"location":"synchronization/#template-level-synchronization","text":"Template-level synchronization limits parallel execution of the template across workflows, if templates have the same synchronization reference. In this example, acquire-lock template has synchronization reference of template key which is configured as limit 2, so two instances of templates will be executed at a given time: even multiple steps/tasks within workflow or different workflows referring to the same template. Using a semaphore configured by a ConfigMap : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-tmpl-level- spec : entrypoint : synchronization-tmpl-level-example templates : - name : synchronization-tmpl-level-example steps : - - name : synchronization-acquire-lock template : acquire-lock arguments : parameters : - name : seconds value : \"{{item}}\" withParam : '[\"1\",\"2\",\"3\",\"4\",\"5\"]' - name : acquire-lock synchronization : semaphore : configMapKeyRef : name : my-config key : template container : image : alpine:latest command : [ sh , -c ] args : [ \"sleep 10; echo acquired lock\" ] Using a mutex: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-tmpl-level- spec : entrypoint : synchronization-tmpl-level-example templates : - name : synchronization-tmpl-level-example steps : - - name : synchronization-acquire-lock template : acquire-lock arguments : parameters : - name : seconds value : \"{{item}}\" withParam : '[\"1\",\"2\",\"3\",\"4\",\"5\"]' - name : acquire-lock synchronization : mutex : name : template container : image : alpine:latest command : [ sh , -c ] args : [ \"sleep 10; echo acquired lock\" ] Examples: Workflow level semaphore Workflow level mutex Step level semaphore Step level mutex","title":"Template-level Synchronization"},{"location":"synchronization/#other-parallelism-support","text":"In addition to this synchronization, the workflow controller supports a parallelism setting that applies to all workflows in the system (it is not granular to a class of workflows, or tasks withing them). Furthermore, there is a parallelism setting at the workflow and template level, but this only restricts total concurrent executions of tasks within the same workflow.","title":"Other Parallelism support"},{"location":"template-defaults/","text":"Template Defaults \u00b6 v3.1 and after Introduction \u00b6 TemplateDefaults feature enables the user to configure the default template values in workflow spec level that will apply to all the templates in the workflow. If the template has a value that also has a default value in templateDefault , the Template's value will take precedence. These values will be applied during the runtime. 
Template values and default values are merged using Kubernetes strategic merge patch. To check whether and how list values are merged, inspect the patchStrategy and patchMergeKey tags in the workflow definition . Configuring templateDefaults in WorkflowSpec \u00b6 For example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : name : template-defaults-example spec : entrypoint : main templateDefaults : timeout : 30s # timeout value will be applied to all templates retryStrategy : # retryStrategy value will be applied to all templates limit : 2 templates : - name : main container : image : docker/whalesay:latest template defaults example Configuring templateDefaults in Controller Level \u00b6 Operator can configure the templateDefaults in workflow defaults . This templateDefault will be applied to all the workflow which runs on the controller. The following would be specified in the Config Map: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : # Default values that will apply to all Workflows from this controller, unless overridden on the Workflow-level workflowDefaults : | metadata: annotations: argo: workflows labels: foo: bar spec: ttlStrategy: secondsAfterSuccess: 5 templateDefaults: timeout: 30s","title":"Template Defaults"},{"location":"template-defaults/#template-defaults","text":"v3.1 and after","title":"Template Defaults"},{"location":"template-defaults/#introduction","text":"TemplateDefaults feature enables the user to configure the default template values in workflow spec level that will apply to all the templates in the workflow. If the template has a value that also has a default value in templateDefault , the Template's value will take precedence. These values will be applied during the runtime. Template values and default values are merged using Kubernetes strategic merge patch. To check whether and how list values are merged, inspect the patchStrategy and patchMergeKey tags in the workflow definition .","title":"Introduction"},{"location":"template-defaults/#configuring-templatedefaults-in-workflowspec","text":"For example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : name : template-defaults-example spec : entrypoint : main templateDefaults : timeout : 30s # timeout value will be applied to all templates retryStrategy : # retryStrategy value will be applied to all templates limit : 2 templates : - name : main container : image : docker/whalesay:latest template defaults example","title":"Configuring templateDefaults in WorkflowSpec"},{"location":"template-defaults/#configuring-templatedefaults-in-controller-level","text":"Operator can configure the templateDefaults in workflow defaults . This templateDefault will be applied to all the workflow which runs on the controller. 
The following would be specified in the Config Map: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : # Default values that will apply to all Workflows from this controller, unless overridden on the Workflow-level workflowDefaults : | metadata: annotations: argo: workflows labels: foo: bar spec: ttlStrategy: secondsAfterSuccess: 5 templateDefaults: timeout: 30s","title":"Configuring templateDefaults in Controller Level"},{"location":"tls/","text":"Transport Layer Security \u00b6 v2.8 and after If you're running Argo Server you have three options with increasing transport security (note - you should also be running authentication ): Default configuration \u00b6 v2.8 - 2.12 Defaults to Plain Text v3.0 and after Defaults to Encrypted if cert is available Argo image/deployment defaults to Encrypted with a self-signed certificate which expires after 365 days. Plain Text \u00b6 Recommended for: development. Everything is sent in plain text. Start Argo Server with the --secure=false (or ARGO_SECURE=false ) flag, e.g.: export ARGO_SECURE = false argo server --secure = false To secure the UI you may front it with a HTTPS proxy. Encrypted \u00b6 Recommended for: development and test environments. You can encrypt connections without any real effort. Start Argo Server with the --secure flag, e.g.: argo server --secure It will start with a self-signed certificate that expires after 365 days. Run the CLI with --secure (or ARGO_SECURE=true ) and --insecure-skip-verify (or ARGO_INSECURE_SKIP_VERIFY=true ). argo --secure --insecure-skip-verify list export ARGO_SECURE = true export ARGO_INSECURE_SKIP_VERIFY = true argo --secure --insecure-skip-verify list Tip: Don't forget to update your readiness probe to use HTTPS. To do so, edit your argo-server Deployment's readinessProbe spec: readinessProbe : httpGet : scheme : HTTPS Encrypted and Verified \u00b6 Recommended for: production environments. Run your HTTPS proxy in front of the Argo Server. You'll need to set-up your certificates (this is out of scope of this documentation). Start Argo Server with the --secure flag, e.g.: argo server --secure As before, it will start with a self-signed certificate that expires after 365 days. Run the CLI with --secure (or ARGO_SECURE=true ) only. argo --secure list export ARGO_SECURE = true argo list TLS Min Version \u00b6 Set TLS_MIN_VERSION to be the minimum TLS version to use. This is v1.2 by default. This must be one of these int values . Version Value v1.0 769 v1.1 770 v1.2 771 v1.3 772","title":"Transport Layer Security"},{"location":"tls/#transport-layer-security","text":"v2.8 and after If you're running Argo Server you have three options with increasing transport security (note - you should also be running authentication ):","title":"Transport Layer Security"},{"location":"tls/#default-configuration","text":"v2.8 - 2.12 Defaults to Plain Text v3.0 and after Defaults to Encrypted if cert is available Argo image/deployment defaults to Encrypted with a self-signed certificate which expires after 365 days.","title":"Default configuration"},{"location":"tls/#plain-text","text":"Recommended for: development. Everything is sent in plain text. Start Argo Server with the --secure=false (or ARGO_SECURE=false ) flag, e.g.: export ARGO_SECURE = false argo server --secure = false To secure the UI you may front it with a HTTPS proxy.","title":"Plain Text"},{"location":"tls/#encrypted","text":"Recommended for: development and test environments. 
You can encrypt connections without any real effort. Start Argo Server with the --secure flag, e.g.: argo server --secure It will start with a self-signed certificate that expires after 365 days. Run the CLI with --secure (or ARGO_SECURE=true ) and --insecure-skip-verify (or ARGO_INSECURE_SKIP_VERIFY=true ). argo --secure --insecure-skip-verify list export ARGO_SECURE = true export ARGO_INSECURE_SKIP_VERIFY = true argo --secure --insecure-skip-verify list Tip: Don't forget to update your readiness probe to use HTTPS. To do so, edit your argo-server Deployment's readinessProbe spec: readinessProbe : httpGet : scheme : HTTPS","title":"Encrypted"},{"location":"tls/#encrypted-and-verified","text":"Recommended for: production environments. Run your HTTPS proxy in front of the Argo Server. You'll need to set-up your certificates (this is out of scope of this documentation). Start Argo Server with the --secure flag, e.g.: argo server --secure As before, it will start with a self-signed certificate that expires after 365 days. Run the CLI with --secure (or ARGO_SECURE=true ) only. argo --secure list export ARGO_SECURE = true argo list","title":"Encrypted and Verified"},{"location":"tls/#tls-min-version","text":"Set TLS_MIN_VERSION to be the minimum TLS version to use. This is v1.2 by default. This must be one of these int values . Version Value v1.0 769 v1.1 770 v1.2 771 v1.3 772","title":"TLS Min Version"},{"location":"tolerating-pod-deletion/","text":"Tolerating Pod Deletion \u00b6 v2.12 and after In Kubernetes, pods are cattle and can be deleted at any time. Deletion could be manually via kubectl delete pod , during a node drain, or for other reasons. This can be very inconvenient, your workflow will error, but for reasons outside of your control. A pod disruption budget can reduce the likelihood of this happening. But, it cannot entirely prevent it. To retry pods that were deleted, set retryStrategy.retryPolicy: OnError . This can be set at a workflow-level, template-level, or globally (using workflow defaults ) Example \u00b6 Run the following workflow (which will sleep for 30s): apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : name : example spec : retryStrategy : retryPolicy : OnError limit : 1 entrypoint : main templates : - name : main container : image : docker/whalesay:latest command : - sleep - 30s Then execute kubectl delete pod example . You'll see that the errored node is automatically retried. \ud83d\udca1 Read more on architecting workflows for reliability .","title":"Tolerating Pod Deletion"},{"location":"tolerating-pod-deletion/#tolerating-pod-deletion","text":"v2.12 and after In Kubernetes, pods are cattle and can be deleted at any time. Deletion could be manually via kubectl delete pod , during a node drain, or for other reasons. This can be very inconvenient, your workflow will error, but for reasons outside of your control. A pod disruption budget can reduce the likelihood of this happening. But, it cannot entirely prevent it. To retry pods that were deleted, set retryStrategy.retryPolicy: OnError . 
This can be set at a workflow-level, template-level, or globally (using workflow defaults )","title":"Tolerating Pod Deletion"},{"location":"tolerating-pod-deletion/#example","text":"Run the following workflow (which will sleep for 30s): apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : name : example spec : retryStrategy : retryPolicy : OnError limit : 1 entrypoint : main templates : - name : main container : image : docker/whalesay:latest command : - sleep - 30s Then execute kubectl delete pod example . You'll see that the errored node is automatically retried. \ud83d\udca1 Read more on architecting workflows for reliability .","title":"Example"},{"location":"training/","text":"Training \u00b6 Videos \u00b6 We also have a YouTube playlist of videos that includes workshops you can follow along with: Open the playlist Hands-On \u00b6 We've created a Killercoda course featuring beginner and intermediate lessons . These allow to you try out Argo Workflows in your web browser without needing to install anything on your computer. Each lesson starts up a Kubernetes cluster that you can access via a web browser. Additional resources \u00b6 Visit the awesome-argo GitHub repo for more educational resources.","title":"Training"},{"location":"training/#training","text":"","title":"Training"},{"location":"training/#videos","text":"We also have a YouTube playlist of videos that includes workshops you can follow along with: Open the playlist","title":"Videos"},{"location":"training/#hands-on","text":"We've created a Killercoda course featuring beginner and intermediate lessons . These allow to you try out Argo Workflows in your web browser without needing to install anything on your computer. Each lesson starts up a Kubernetes cluster that you can access via a web browser.","title":"Hands-On"},{"location":"training/#additional-resources","text":"Visit the awesome-argo GitHub repo for more educational resources.","title":"Additional resources"},{"location":"upgrading/","text":"Upgrading Guide \u00b6 Breaking changes typically (sometimes we don't realise they are breaking) have \"!\" in the commit message, as per the conventional commits . Upgrading to v3.5 \u00b6 There are no known breaking changes in this release. Please file an issue if you encounter any unexpected problems after upgrading. Upgrading to v3.4 \u00b6 Non-Emissary executors are removed. ( #7829 ) \u00b6 Emissary executor is now the only supported executor. If you are using other executors, e.g. docker, k8sapi, pns, and kubelet, you need to remove your containerRuntimeExecutors and containerRuntimeExecutor from your controller's configmap. If you have workflows that use different executors with the label workflows.argoproj.io/container-runtime-executor , this is no longer supported and will not be effective. chore!: Remove dataflow pipelines from codebase. (#9071) \u00b6 You are affected if you are using dataflow pipelines in the UI or via the /pipelines endpoint. We no longer support dataflow pipelines and all relevant code has been removed. feat!: Add entrypoint lookup. Fixes #8344 \u00b6 Affected if: Using the Emissary executor. Used the args field for any entry in images . This PR automatically looks up the command and entrypoint. The implementation for config look-up was incorrect (it allowed you to specify args but not entrypoint ). args has been removed to correct the behaviour. If you are incorrectly configured, the workflow controller will error on start-up. 
Actions \u00b6 You don't need to configure images that use v2 manifests anymore. You can just remove them (e.g. argoproj/argosay:v2): % docker manifest inspect argoproj/argosay:v2 ... \"schemaVersion\" : 2 , ... For v1 manifests (e.g. docker/whalesay:latest): % docker image inspect -f '{{.Config.Entrypoint}} {{.Config.Cmd}}' docker/whalesay:latest [] [ /bin/bash ] images : docker/whalesay:latest : cmd : [ /bin/bash ] feat: Fail on invalid config. (#8295) \u00b6 The workflow controller will error on start-up if incorrectly configured, rather than silently ignoring mis-configuration. Failed to register watch for controller config map: error unmarshaling JSON: while decoding JSON: json: unknown field \\\"args\\\" feat: add indexes for improve archived workflow performance. (#8860) \u00b6 This PR adds indexes to archived workflow tables. This change may cause the upgrade to take a long time if the user has a large table. feat: enhance artifact visualization (#8655) \u00b6 For AWS users using S3: visualizing artifacts in the UI and downloading them now requires an additional \"Action\" to be configured in your S3 bucket policy: \"ListBucket\". Upgrading to v3.3 \u00b6 662a7295b feat: Replace patch pod with create workflowtaskresult . Fixes #3961 (#8000) \u00b6 The PR changes the permissions that can be used by a workflow to remove the pod patch permission. See workflow RBAC and #8013 . 06d4bf76f fix: Reduce agent permissions. Fixes #7986 (#7987) \u00b6 The PR changes the permissions used by the agent to report back the outcome of HTTP template requests. The permission patch workflowtasksets/status replaces patch workflowtasksets , for example: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : agent rules : - apiGroups : - argoproj.io resources : - workflowtasksets/status verbs : - patch Workflows running during any upgrade should be given both permissions. See #8013 . feat!: Remove deprecated config flags \u00b6 This PR removes the following configmap items - executorImage (use executor.image in configmap instead) e.g. Workflow controller configmap similar to the following one given below won't be valid anymore: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : ... executorImage : argoproj/argocli:latest ... From now on, only provide the executor image in workflow controller as a command argument as shown below: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : ... executor : | image: argoproj/argocli:latest ... executorImagePullPolicy (use executor.imagePullPolicy in configmap instead) e.g. Workflow controller configmap similar to the following one given below won't be valid anymore: data : ... executorImagePullPolicy : IfNotPresent ... Change it as shown below: data : ... executor : | imagePullPolicy: IfNotPresent ... executorResources (use executor.resources in configmap instead) e.g. Workflow controller configmap similar to the following one given below won't be valid anymore: data : ... executorResources : requests : cpu : 0.1 memory : 64Mi limits : cpu : 0.5 memory : 512Mi ... Change it as shown below: data : ... executor : | resources: requests: cpu: 0.1 memory: 64Mi limits: cpu: 0.5 memory: 512Mi ... fce82d572 feat: Remove pod workers (#7837) \u00b6 This PR removes pod workers from the code; the pod informer now writes directly into the workflow queue. As a result, the --pod-workers flag has been removed.
93c11a24ff feat: Add TLS to Metrics and Telemetry servers (#7041) \u00b6 This PR adds the ability to send metrics over TLS with a self-signed certificate. In v3.5 this will be enabled by default, so it is recommended that users enable this functionality now. 0758eab11 feat(server)!: Sync dispatch of webhook events by default \u00b6 This is not expected to impact users. Event dispatch in the Argo Server has been changed from async to sync by default. This is so that errors are surfaced to the client, rather than only appearing as logs or Kubernetes events. It is possible that response times under load are too long for your client and you may prefer to revert this behaviour. To revert this behaviour, restart Argo Server with ARGO_EVENT_ASYNC_DISPATCH=true . Make sure that asyncDispatch=true is logged. bd49c6303 fix(artifact)!: default https to any URL missing a scheme. Fixes #6973 \u00b6 An HTTPArtifact without a scheme will now default to https instead of http. Users need to explicitly include an http prefix if they want to retrieve an HTTPArtifact through http. chore!: Remove the hidden flag --verify from argo submit \u00b6 The hidden flag --verify has been removed from argo submit . This is an internal testing flag we don't need anymore. Upgrading to v3.2 \u00b6 e5b131a33 feat: Add template node to pod name. Fixes #1319 (#6712) \u00b6 This adds the template name to the pod name, to make it easier to understand which pod ran which step. This behaviour can be reverted by setting POD_NAMES=v1 on the workflow controller. be63efe89 feat(executor)!: Change argoexec base image to alpine. Closes #5720 (#6006) \u00b6 Changing from Debian to Alpine reduces the size of the argoexec image, resulting in faster-starting workflow pods, and it also reduces the risk of security issues. There is no such thing as a free lunch. There may be other behaviour changes we don't know of yet. Some users found this change prevented workflows with very large parameters from running. See #7586 48d7ad3 chore: Remove onExit naming transition scaffolding code (#6297) \u00b6 When upgrading from v3.2, workflows that are running at the time of the upgrade and have onExit steps may experience the onExit step running twice. This is only applicable for workflows that began running before a workflow-controller upgrade and are still running after the upgrade is complete. This is only applicable for upgrading from v2.12 or earlier directly to v3.2 or later. Even under these conditions, duplicate work may not be experienced. Upgrading to v3.1 \u00b6 3fff791e4 build!: Automatically add manifests to v* tags (#5880) \u00b6 The manifests in the repository on the tag will no longer contain the image tag; instead they will contain :latest . You must not get your manifests from the Git repository; you must get them from the release notes. You must not use the stable tag. This is defunct, and will be removed in v3.1. ab361667a feat(controller) Emissary executor. (#4925) \u00b6 The Emissary executor is not a breaking change per se, but it is brand new, so we would not recommend you use it by default yet. Instead, we recommend you test it out on some workflows using a workflow-controller-configmap configuration . # Specifies the executor to use. # # You can use this to: # * Tailor your executor based on your preference for security or performance. # * Test out an executor without committing yourself to use it for every workflow. # # To find out which executor was actually used, see the `wait` container logs.
# # The list is in order of precedence; the first matching executor is used. # This has precedence over `containerRuntimeExecutor`. containerRuntimeExecutors : | - name: emissary selector: matchLabels: workflows.argoproj.io/container-runtime-executor: emissary be63efe89 feat(controller): Expression template tags. Resolves #4548 & #1293 (#5115) \u00b6 This PR introduced a new expression syntax know as \"expression tag template\". A user has reported that this does not always play nicely with the when condition syntax (Goevaluate). This can be resolved using a single quote in your when expression: when : \"'{{inputs.parameters.should-print}}' != '2021-01-01'\" Learn more Upgrading to v3.0 \u00b6 defbd600e fix: Default ARGO_SECURE=true. Fixes #5607 (#5626) \u00b6 The server now starts with TLS enabled by default if a key is available. The original behaviour can be configured with --secure=false . If you have an ingress, you may need to add the appropriate annotations:(varies by ingress): alb.ingress.kubernetes.io/backend-protocol : HTTPS nginx.ingress.kubernetes.io/backend-protocol : HTTPS 01d310235 chore(server)!: Required authentication by default. Resolves #5206 (#5211) \u00b6 To login to the user interface, you must provide a login token. The original behaviour can be configured with --auth-mode=server . f31e0c6f9 chore!: Remove deprecated fields (#5035) \u00b6 Some fields that were deprecated in early 2020 have been removed. Field Action template.template and template.templateRef The workflow spec must be changed to use steps or DAG, otherwise the workflow will error. spec.ttlSecondsAfterFinished change to spec.ttlStrategy.secondsAfterCompletion , otherwise the workflow will not be garbage collected as expected. To find impacted workflows: kubectl get wf --all-namespaces -o yaml | grep templateRef kubectl get wf --all-namespaces -o yaml | grep ttlSecondsAfterFinished c8215f972 feat(controller)!: Key-only artifacts. Fixes #3184 (#4618) \u00b6 This change is not breaking per-se, but many users do not appear to aware of artifact repository ref , so check your usage of that feature if you have problems.","title":"Upgrading Guide"},{"location":"upgrading/#upgrading-guide","text":"Breaking changes typically (sometimes we don't realise they are breaking) have \"!\" in the commit message, as per the conventional commits .","title":"Upgrading Guide"},{"location":"upgrading/#upgrading-to-v35","text":"There are no known breaking changes in this release. Please file an issue if you encounter any unexpected problems after upgrading.","title":"Upgrading to v3.5"},{"location":"upgrading/#upgrading-to-v34","text":"","title":"Upgrading to v3.4"},{"location":"upgrading/#non-emissary-executors-are-removed-7829","text":"Emissary executor is now the only supported executor. If you are using other executors, e.g. docker, k8sapi, pns, and kubelet, you need to remove your containerRuntimeExecutors and containerRuntimeExecutor from your controller's configmap. If you have workflows that use different executors with the label workflows.argoproj.io/container-runtime-executor , this is no longer supported and will not be effective.","title":"Non-Emissary executors are removed. (#7829)"},{"location":"upgrading/#chore-remove-dataflow-pipelines-from-codebase-9071","text":"You are affected if you are using dataflow pipelines in the UI or via the /pipelines endpoint. We no longer support dataflow pipelines and all relevant code has been removed.","title":"chore!: Remove dataflow pipelines from codebase. 
(#9071)"},{"location":"upgrading/#feat-add-entrypoint-lookup-fixes-8344","text":"Affected if: Using the Emissary executor. Used the args field for any entry in images . This PR automatically looks up the command and entrypoint. The implementation for config look-up was incorrect (it allowed you to specify args but not entrypoint ). args has been removed to correct the behaviour. If you are incorrectly configured, the workflow controller will error on start-up.","title":"feat!: Add entrypoint lookup. Fixes #8344"},{"location":"upgrading/#actions","text":"You don't need to configure images that use v2 manifests anymore. You can just remove them (e.g. argoproj/argosay:v2): % docker manifest inspect argoproj/argosay:v2 ... \"schemaVersion\" : 2 , ... For v1 manifests (e.g. docker/whalesay:latest): % docker image inspect -f '{{.Config.Entrypoint}} {{.Config.Cmd}}' docker/whalesay:latest [] [ /bin/bash ] images : docker/whalesay:latest : cmd : [ /bin/bash ]","title":"Actions"},{"location":"upgrading/#feat-fail-on-invalid-config-8295","text":"The workflow controller will error on start-up if incorrectly configured, rather than silently ignoring mis-configuration. Failed to register watch for controller config map: error unmarshaling JSON: while decoding JSON: json: unknown field \\\"args\\\"","title":"feat: Fail on invalid config. (#8295)"},{"location":"upgrading/#feat-add-indexes-for-improve-archived-workflow-performance-8860","text":"This PR adds indexes to archived workflow tables. This change may cause a long time to upgrade if the user has a large table.","title":"feat: add indexes for improve archived workflow performance. (#8860)"},{"location":"upgrading/#feat-enhance-artifact-visualization-8655","text":"For AWS users using S3: visualizing artifacts in the UI and downloading them now requires an additional \"Action\" to be configured in your S3 bucket policy: \"ListBucket\".","title":"feat: enhance artifact visualization (#8655)"},{"location":"upgrading/#upgrading-to-v33","text":"","title":"Upgrading to v3.3"},{"location":"upgrading/#662a7295b-feat-replace-patch-pod-with-create-workflowtaskresult-fixes-3961-8000","text":"The PR changes the permissions that can be used by a workflow to remove the pod patch permission. See workflow RBAC and #8013 .","title":"662a7295b feat: Replace patch pod with create workflowtaskresult. Fixes #3961 (#8000)"},{"location":"upgrading/#06d4bf76f-fix-reduce-agent-permissions-fixes-7986-7987","text":"The PR changes the permissions used by the agent to report back the outcome of HTTP template requests. The permission patch workflowtasksets/status replaces patch workflowtasksets , for example: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : agent rules : - apiGroups : - argoproj.io resources : - workflowtasksets/status verbs : - patch Workflows running during any upgrade should be give both permissions. See #8013 .","title":"06d4bf76f fix: Reduce agent permissions. Fixes #7986 (#7987)"},{"location":"upgrading/#feat-remove-deprecated-config-flags","text":"This PR removes the following configmap items - executorImage (use executor.image in configmap instead) e.g. Workflow controller configmap similar to the following one given below won't be valid anymore: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : ... executorImage : argoproj/argocli:latest ... 
From now and onwards, only provide the executor image in workflow controller as a command argument as shown below: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : ... executor : | image: argoproj/argocli:latest ... executorImagePullPolicy (use executor.imagePullPolicy in configmap instead) e.g. Workflow controller configmap similar to the following one given below won't be valid anymore: data : ... executorImagePullPolicy : IfNotPresent ... Change it as shown below: data : ... executor : | imagePullPolicy: IfNotPresent ... executorResources (use executor.resources in configmap instead) e.g. Workflow controller configmap similar to the following one given below won't be valid anymore: data : ... executorResources : requests : cpu : 0.1 memory : 64Mi limits : cpu : 0.5 memory : 512Mi ... Change it as shown below: data : ... executor : | resources: requests: cpu: 0.1 memory: 64Mi limits: cpu: 0.5 memory: 512Mi ...","title":"feat!: Remove deprecated config flags"},{"location":"upgrading/#fce82d572-feat-remove-pod-workers-7837","text":"This PR removes pod workers from the code, the pod informer directly writes into the workflow queue. As a result the --pod-workers flag has been removed.","title":"fce82d572 feat: Remove pod workers (#7837)"},{"location":"upgrading/#93c11a24ff-feat-add-tls-to-metrics-and-telemetry-servers-7041","text":"This PR adds the ability to send metrics over TLS with a self-signed certificate. In v3.5 this will be enabled by default, so it is recommended that users enable this functionality now.","title":"93c11a24ff feat: Add TLS to Metrics and Telemetry servers (#7041)"},{"location":"upgrading/#0758eab11-featserver-sync-dispatch-of-webhook-events-by-default","text":"This is not expected to impact users. Events dispatch in the Argo Server has been change from async to sync by default. This is so that errors are surfaced to the client, rather than only appearing as logs or Kubernetes events. It is possible that response times under load are too long for your client and you may prefer to revert this behaviour. To revert this behaviour, restart Argo Server with ARGO_EVENT_ASYNC_DISPATCH=true . Make sure that asyncDispatch=true is logged.","title":"0758eab11 feat(server)!: Sync dispatch of webhook events by default"},{"location":"upgrading/#bd49c6303-fixartifact-default-https-to-any-url-missing-a-scheme-fixes-6973","text":"HTTPArtifact without a scheme will now defaults to https instead of http user need to explicitly include a http prefix if they want to retrieve HTTPArtifact through http","title":"bd49c6303 fix(artifact)!: default https to any URL missing a scheme. Fixes #6973"},{"location":"upgrading/#chore-remove-the-hidden-flag-verify-from-argo-submit","text":"The hidden flag --verify has been removed from argo submit . This is a internal testing flag we don't need anymore.","title":"chore!: Remove the hidden flag --verify from argo submit"},{"location":"upgrading/#upgrading-to-v32","text":"","title":"Upgrading to v3.2"},{"location":"upgrading/#e5b131a33-feat-add-template-node-to-pod-name-fixes-1319-6712","text":"This add the template name to the pod name, to make it easier to understand which pod ran which step. This behaviour can be reverted by setting POD_NAMES=v1 on the workflow controller.","title":"e5b131a33 feat: Add template node to pod name. 
Fixes #1319 (#6712)"},{"location":"upgrading/#be63efe89-featexecutor-change-argoexec-base-image-to-alpine-closes-5720-6006","text":"Changing from Debian to Alpine reduces the size of the argoexec image, resulting is faster starting workflow pods, and it also reduce the risk of security issues. There is not such thing as a free lunch. There maybe other behaviour changes we don't know of yet. Some users found this change prevented workflow with very large parameters from running. See #7586","title":"be63efe89 feat(executor)!: Change argoexec base image to alpine. Closes #5720 (#6006)"},{"location":"upgrading/#48d7ad3-chore-remove-onexit-naming-transition-scaffolding-code-6297","text":"When upgrading from v3.2 workflows that are running at the time of the upgrade and have onExit steps may experience the onExit step running twice. This is only applicable for workflows that began running before a workflow-controller upgrade and are still running after the upgrade is complete. This is only applicable for upgrading from v2.12 or earlier directly to v3.2 or later. Even under these conditions, duplicate work may not be experienced.","title":"48d7ad3 chore: Remove onExit naming transition scaffolding code (#6297)"},{"location":"upgrading/#upgrading-to-v31","text":"","title":"Upgrading to v3.1"},{"location":"upgrading/#3fff791e4-build-automatically-add-manifests-to-v-tags-5880","text":"The manifests in the repository on the tag will no longer contain the image tag, instead they will contain :latest . You must not get your manifests from the Git repository, you must get them from the release notes. You must not use the stable tag. This is defunct, and will be removed in v3.1.","title":"3fff791e4 build!: Automatically add manifests to v* tags (#5880)"},{"location":"upgrading/#ab361667a-featcontroller-emissary-executor-4925","text":"The Emissary executor is not a breaking change per-se, but it is brand new so we would not recommend you use it by default yet. Instead, we recommend you test it out on some workflows using a workflow-controller-configmap configuration . # Specifies the executor to use. # # You can use this to: # * Tailor your executor based on your preference for security or performance. # * Test out an executor without committing yourself to use it for every workflow. # # To find out which executor was actually use, see the `wait` container logs. # # The list is in order of precedence; the first matching executor is used. # This has precedence over `containerRuntimeExecutor`. containerRuntimeExecutors : | - name: emissary selector: matchLabels: workflows.argoproj.io/container-runtime-executor: emissary","title":"ab361667a feat(controller) Emissary executor. (#4925)"},{"location":"upgrading/#be63efe89-featcontroller-expression-template-tags-resolves-4548-1293-5115","text":"This PR introduced a new expression syntax know as \"expression tag template\". A user has reported that this does not always play nicely with the when condition syntax (Goevaluate). This can be resolved using a single quote in your when expression: when : \"'{{inputs.parameters.should-print}}' != '2021-01-01'\" Learn more","title":"be63efe89 feat(controller): Expression template tags. Resolves #4548 & #1293 (#5115)"},{"location":"upgrading/#upgrading-to-v30","text":"","title":"Upgrading to v3.0"},{"location":"upgrading/#defbd600e-fix-default-argo_securetrue-fixes-5607-5626","text":"The server now starts with TLS enabled by default if a key is available. The original behaviour can be configured with --secure=false . 
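As an example of reverting this default, the --secure=false flag mentioned above can be passed to the server; a minimal sketch of the relevant part of an argo-server container spec, where the container name and image reference are illustrative:
containers:
  - name: argo-server
    image: quay.io/argoproj/argocli:latest   # illustrative image reference
    args:
      - server
      - --secure=false   # serve without TLS, restoring the original behaviour described above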
If you have an ingress, you may need to add the appropriate annotations:(varies by ingress): alb.ingress.kubernetes.io/backend-protocol : HTTPS nginx.ingress.kubernetes.io/backend-protocol : HTTPS","title":"defbd600e fix: Default ARGO_SECURE=true. Fixes #5607 (#5626)"},{"location":"upgrading/#01d310235-choreserver-required-authentication-by-default-resolves-5206-5211","text":"To login to the user interface, you must provide a login token. The original behaviour can be configured with --auth-mode=server .","title":"01d310235 chore(server)!: Required authentication by default. Resolves #5206 (#5211)"},{"location":"upgrading/#f31e0c6f9-chore-remove-deprecated-fields-5035","text":"Some fields that were deprecated in early 2020 have been removed. Field Action template.template and template.templateRef The workflow spec must be changed to use steps or DAG, otherwise the workflow will error. spec.ttlSecondsAfterFinished change to spec.ttlStrategy.secondsAfterCompletion , otherwise the workflow will not be garbage collected as expected. To find impacted workflows: kubectl get wf --all-namespaces -o yaml | grep templateRef kubectl get wf --all-namespaces -o yaml | grep ttlSecondsAfterFinished","title":"f31e0c6f9 chore!: Remove deprecated fields (#5035)"},{"location":"upgrading/#c8215f972-featcontroller-key-only-artifacts-fixes-3184-4618","text":"This change is not breaking per-se, but many users do not appear to aware of artifact repository ref , so check your usage of that feature if you have problems.","title":"c8215f972 feat(controller)!: Key-only artifacts. Fixes #3184 (#4618)"},{"location":"variables/","text":"Workflow Variables \u00b6 Some fields in a workflow specification allow for variable references which are automatically substituted by Argo. How to use variables \u00b6 Variables are enclosed in curly braces: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-parameters- spec : entrypoint : whalesay arguments : parameters : - name : message value : hello world templates : - name : whalesay inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] The following variables are made available to reference various meta-data of a workflow: Template Tag Kinds \u00b6 There are two kinds of template tag: simple The default, e.g. {{workflow.name}} expression Where {{ is immediately followed by = , e.g. {{=workflow.name}} . Simple \u00b6 The tag is substituted with the variable that has a name the same as the tag. Simple tags may have white-space between the brackets and variable as seen below. However, there is a known issue where variables may fail to interpolate with white-space, so it is recommended to avoid using white-space until this issue is resolved. Please report unexpected behavior with reproducible examples. args : [ \"{{ inputs.parameters.message }}\" ] Expression \u00b6 Since v3.1 The tag is substituted with the result of evaluating the tag as an expression. Note that any hyphenated parameter names or step names will cause a parsing error. You can reference them by indexing into the parameter or step map, e.g. inputs.parameters['my-param'] or steps['my-step'].outputs.result . Learn about the expression syntax . 
Examples \u00b6 Plain list: [1, 2] Filter a list: filter([1, 2], { # > 1}) Map a list: map([1, 2], { # * 2 }) We provide some core functions: Cast to int: asInt(inputs.parameters['my-int-param']) Cast to float: asFloat(inputs.parameters['my-float-param']) Cast to string: string(1) Convert to a JSON string (needed for withParam ): toJson([1, 2]) Extract data from JSON: jsonpath(inputs.parameters.json, '$.some.path') You can also use Sprig functions : Trim a string: sprig.trim(inputs.parameters['my-string-param']) Sprig error handling Sprig functions often do not raise errors. For example, if int is used on an invalid value, it returns 0 . Please review the Sprig documentation to understand which functions raise errors and which do not. Reference \u00b6 All Templates \u00b6 Variable Description inputs.parameters. Input parameter to a template inputs.parameters All input parameters to a template as a JSON string inputs.artifacts. Input artifact to a template node.name Full name of the node Steps Templates \u00b6 Variable Description steps.name Name of the step steps..id unique id of container step steps..ip IP address of a previous daemon container step steps..status Phase status of any previous step steps..exitCode Exit code of any previous script or container step steps..startedAt Time-stamp when the step started steps..finishedAt Time-stamp when the step finished steps..hostNodeName Host node where task ran (available from version 3.5) steps..outputs.result Output result of any previous container or script step steps..outputs.parameters When the previous step uses withItems or withParams , this contains a JSON array of the output parameter maps of each invocation steps..outputs.parameters. Output parameter of any previous step. When the previous step uses withItems or withParams , this contains a JSON array of the output parameter values of each invocation steps..outputs.artifacts. Output artifact of any previous step DAG Templates \u00b6 Variable Description tasks.name Name of the task tasks..id unique id of container task tasks..ip IP address of a previous daemon container task tasks..status Phase status of any previous task tasks..exitCode Exit code of any previous script or container task tasks..startedAt Time-stamp when the task started tasks..finishedAt Time-stamp when the task finished tasks..hostNodeName Host node where task ran (available from version 3.5) tasks..outputs.result Output result of any previous container or script task tasks..outputs.parameters When the previous task uses withItems or withParams , this contains a JSON array of the output parameter maps of each invocation tasks..outputs.parameters. Output parameter of any previous task. When the previous task uses withItems or withParams , this contains a JSON array of the output parameter values of each invocation tasks..outputs.artifacts. Output artifact of any previous task HTTP Templates \u00b6 Since v3.3 Only available for successCondition Variable Description request.method Request method ( string ) request.url Request URL ( string ) request.body Request body ( string ) request.headers Request headers ( map[string][]string ) response.statusCode Response status code ( int ) response.body Response body ( string ) response.headers Response headers ( map[string][]string ) RetryStrategy \u00b6 When using the expression field within retryStrategy , special variables are available. 
Variable Description lastRetry.exitCode Exit code of the last retry lastRetry.status Status of the last retry lastRetry.duration Duration in seconds of the last retry lastRetry.message Message output from the last retry (available from version 3.5) Note: These variables evaluate to a string type. If using advanced expressions, either cast them to int values ( expression: \"{{=asInt(lastRetry.exitCode) >= 2}}\" ) or compare them to string values ( expression: \"{{=lastRetry.exitCode != '2'}}\" ). Container/Script Templates \u00b6 Variable Description pod.name Pod name of the container/script retries The retry number of the container/script if retryStrategy is specified inputs.artifacts..path Local path of the input artifact outputs.artifacts..path Local path of the output artifact outputs.parameters..path Local path of the output parameter Loops ( withItems / withParam ) \u00b6 Variable Description item Value of the item in a list item. Field value of the item in a list of maps Metrics \u00b6 When emitting custom metrics in a template , special variables are available that allow self-reference to the current step. Variable Description status Phase status of the metric-emitting template duration Duration of the metric-emitting template in seconds (only applicable in Template -level metrics, for Workflow -level use workflow.duration ) exitCode Exit code of the metric-emitting template inputs.parameters. Input parameter of the metric-emitting template outputs.parameters. Output parameter of the metric-emitting template outputs.result Output result of the metric-emitting template resourcesDuration.{cpu,memory} Resources duration in seconds . Must be one of resourcesDuration.cpu or resourcesDuration.memory , if available. For more info, see the Resource Duration doc. Real-Time Metrics \u00b6 Some variables can be emitted in real-time (as opposed to just when the step/task completes). To emit these variables in real time, set realtime: true under gauge (note: only Gauge metrics allow for real time variable emission). Metrics currently available for real time emission: For Workflow -level metrics: workflow.duration For Template -level metrics: duration Global \u00b6 Variable Description workflow.name Workflow name workflow.namespace Workflow namespace workflow.mainEntrypoint Workflow's initial entrypoint workflow.serviceAccountName Workflow service account name workflow.uid Workflow UID. Useful for setting ownership reference to a resource, or a unique artifact location workflow.parameters. Input parameter to the workflow workflow.parameters All input parameters to the workflow as a JSON string (this is deprecated in favor of workflow.parameters.json as this doesn't work with expression tags and that does) workflow.parameters.json All input parameters to the workflow as a JSON string workflow.outputs.parameters. Global parameter in the workflow workflow.outputs.artifacts. Global artifact in the workflow workflow.annotations. Workflow annotations workflow.annotations.json all Workflow annotations as a JSON string workflow.labels. Workflow labels workflow.labels.json all Workflow labels as a JSON string workflow.creationTimestamp Workflow creation time-stamp formatted in RFC 3339 (e.g. 2018-08-23T05:42:49Z ) workflow.creationTimestamp. Creation time-stamp formatted with a strftime format character. workflow.creationTimestamp.RFC3339 Creation time-stamp formatted with in RFC 3339. 
workflow.priority Workflow priority workflow.duration Workflow duration estimate in seconds, may differ from actual duration by a couple of seconds workflow.scheduledTime Scheduled runtime formatted in RFC 3339 (only available for CronWorkflow ) Exit Handler \u00b6 Variable Description workflow.status Workflow status. One of: Succeeded , Failed , Error workflow.failures A list of JSON objects containing information about nodes that failed or errored during execution. Available fields: displayName , message , templateName , phase , podName , and finishedAt . Knowing where you are \u00b6 The idea with creating a WorkflowTemplate is that they are reusable bits of code you will use in many actual Workflows. Sometimes it is useful to know which workflow you are part of. workflow.mainEntrypoint is one way you can do this. If each of your actual workflows has a differing entrypoint, you can identify the workflow you're part of. Given this use in a WorkflowTemplate : apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : say-main-entrypoint spec : entrypoint : echo templates : - name : echo container : image : alpine command : [ echo ] args : [ \"{{workflow.mainEntrypoint}}\" ] I can distinguish my caller: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : foo- spec : entrypoint : foo templates : - name : foo steps : - - name : step templateRef : name : say-main-entrypoint template : echo results in a log of foo apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : bar- spec : entrypoint : bar templates : - name : bar steps : - - name : step templateRef : name : say-main-entrypoint template : echo results in a log of bar This shouldn't be that helpful in logging, you should be able to identify workflows through other labels in your cluster's log tool, but can be helpful when generating metrics for the workflow for example.","title":"Workflow Variables"},{"location":"variables/#workflow-variables","text":"Some fields in a workflow specification allow for variable references which are automatically substituted by Argo.","title":"Workflow Variables"},{"location":"variables/#how-to-use-variables","text":"Variables are enclosed in curly braces: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-parameters- spec : entrypoint : whalesay arguments : parameters : - name : message value : hello world templates : - name : whalesay inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] The following variables are made available to reference various meta-data of a workflow:","title":"How to use variables"},{"location":"variables/#template-tag-kinds","text":"There are two kinds of template tag: simple The default, e.g. {{workflow.name}} expression Where {{ is immediately followed by = , e.g. {{=workflow.name}} .","title":"Template Tag Kinds"},{"location":"variables/#simple","text":"The tag is substituted with the variable that has a name the same as the tag. Simple tags may have white-space between the brackets and variable as seen below. However, there is a known issue where variables may fail to interpolate with white-space, so it is recommended to avoid using white-space until this issue is resolved. Please report unexpected behavior with reproducible examples. 
args : [ \"{{ inputs.parameters.message }}\" ]","title":"Simple"},{"location":"variables/#expression","text":"Since v3.1 The tag is substituted with the result of evaluating the tag as an expression. Note that any hyphenated parameter names or step names will cause a parsing error. You can reference them by indexing into the parameter or step map, e.g. inputs.parameters['my-param'] or steps['my-step'].outputs.result . Learn about the expression syntax .","title":"Expression"},{"location":"variables/#examples","text":"Plain list: [1, 2] Filter a list: filter([1, 2], { # > 1}) Map a list: map([1, 2], { # * 2 }) We provide some core functions: Cast to int: asInt(inputs.parameters['my-int-param']) Cast to float: asFloat(inputs.parameters['my-float-param']) Cast to string: string(1) Convert to a JSON string (needed for withParam ): toJson([1, 2]) Extract data from JSON: jsonpath(inputs.parameters.json, '$.some.path') You can also use Sprig functions : Trim a string: sprig.trim(inputs.parameters['my-string-param']) Sprig error handling Sprig functions often do not raise errors. For example, if int is used on an invalid value, it returns 0 . Please review the Sprig documentation to understand which functions raise errors and which do not.","title":"Examples"},{"location":"variables/#reference","text":"","title":"Reference"},{"location":"variables/#all-templates","text":"Variable Description inputs.parameters. Input parameter to a template inputs.parameters All input parameters to a template as a JSON string inputs.artifacts. Input artifact to a template node.name Full name of the node","title":"All Templates"},{"location":"variables/#steps-templates","text":"Variable Description steps.name Name of the step steps..id unique id of container step steps..ip IP address of a previous daemon container step steps..status Phase status of any previous step steps..exitCode Exit code of any previous script or container step steps..startedAt Time-stamp when the step started steps..finishedAt Time-stamp when the step finished steps..hostNodeName Host node where task ran (available from version 3.5) steps..outputs.result Output result of any previous container or script step steps..outputs.parameters When the previous step uses withItems or withParams , this contains a JSON array of the output parameter maps of each invocation steps..outputs.parameters. Output parameter of any previous step. When the previous step uses withItems or withParams , this contains a JSON array of the output parameter values of each invocation steps..outputs.artifacts. Output artifact of any previous step","title":"Steps Templates"},{"location":"variables/#dag-templates","text":"Variable Description tasks.name Name of the task tasks..id unique id of container task tasks..ip IP address of a previous daemon container task tasks..status Phase status of any previous task tasks..exitCode Exit code of any previous script or container task tasks..startedAt Time-stamp when the task started tasks..finishedAt Time-stamp when the task finished tasks..hostNodeName Host node where task ran (available from version 3.5) tasks..outputs.result Output result of any previous container or script task tasks..outputs.parameters When the previous task uses withItems or withParams , this contains a JSON array of the output parameter maps of each invocation tasks..outputs.parameters. Output parameter of any previous task. 
When the previous task uses withItems or withParams , this contains a JSON array of the output parameter values of each invocation tasks..outputs.artifacts. Output artifact of any previous task","title":"DAG Templates"},{"location":"variables/#http-templates","text":"Since v3.3 Only available for successCondition Variable Description request.method Request method ( string ) request.url Request URL ( string ) request.body Request body ( string ) request.headers Request headers ( map[string][]string ) response.statusCode Response status code ( int ) response.body Response body ( string ) response.headers Response headers ( map[string][]string )","title":"HTTP Templates"},{"location":"variables/#retrystrategy","text":"When using the expression field within retryStrategy , special variables are available. Variable Description lastRetry.exitCode Exit code of the last retry lastRetry.status Status of the last retry lastRetry.duration Duration in seconds of the last retry lastRetry.message Message output from the last retry (available from version 3.5) Note: These variables evaluate to a string type. If using advanced expressions, either cast them to int values ( expression: \"{{=asInt(lastRetry.exitCode) >= 2}}\" ) or compare them to string values ( expression: \"{{=lastRetry.exitCode != '2'}}\" ).","title":"RetryStrategy"},{"location":"variables/#containerscript-templates","text":"Variable Description pod.name Pod name of the container/script retries The retry number of the container/script if retryStrategy is specified inputs.artifacts..path Local path of the input artifact outputs.artifacts..path Local path of the output artifact outputs.parameters..path Local path of the output parameter","title":"Container/Script Templates"},{"location":"variables/#loops-withitems-withparam","text":"Variable Description item Value of the item in a list item. Field value of the item in a list of maps","title":"Loops (withItems / withParam)"},{"location":"variables/#metrics","text":"When emitting custom metrics in a template , special variables are available that allow self-reference to the current step. Variable Description status Phase status of the metric-emitting template duration Duration of the metric-emitting template in seconds (only applicable in Template -level metrics, for Workflow -level use workflow.duration ) exitCode Exit code of the metric-emitting template inputs.parameters. Input parameter of the metric-emitting template outputs.parameters. Output parameter of the metric-emitting template outputs.result Output result of the metric-emitting template resourcesDuration.{cpu,memory} Resources duration in seconds . Must be one of resourcesDuration.cpu or resourcesDuration.memory , if available. For more info, see the Resource Duration doc.","title":"Metrics"},{"location":"variables/#real-time-metrics","text":"Some variables can be emitted in real-time (as opposed to just when the step/task completes). To emit these variables in real time, set realtime: true under gauge (note: only Gauge metrics allow for real time variable emission). Metrics currently available for real time emission: For Workflow -level metrics: workflow.duration For Template -level metrics: duration","title":"Real-Time Metrics"},{"location":"variables/#global","text":"Variable Description workflow.name Workflow name workflow.namespace Workflow namespace workflow.mainEntrypoint Workflow's initial entrypoint workflow.serviceAccountName Workflow service account name workflow.uid Workflow UID. 
Useful for setting ownership reference to a resource, or a unique artifact location workflow.parameters. Input parameter to the workflow workflow.parameters All input parameters to the workflow as a JSON string (this is deprecated in favor of workflow.parameters.json as this doesn't work with expression tags and that does) workflow.parameters.json All input parameters to the workflow as a JSON string workflow.outputs.parameters. Global parameter in the workflow workflow.outputs.artifacts. Global artifact in the workflow workflow.annotations. Workflow annotations workflow.annotations.json all Workflow annotations as a JSON string workflow.labels. Workflow labels workflow.labels.json all Workflow labels as a JSON string workflow.creationTimestamp Workflow creation time-stamp formatted in RFC 3339 (e.g. 2018-08-23T05:42:49Z ) workflow.creationTimestamp. Creation time-stamp formatted with a strftime format character. workflow.creationTimestamp.RFC3339 Creation time-stamp formatted with in RFC 3339. workflow.priority Workflow priority workflow.duration Workflow duration estimate in seconds, may differ from actual duration by a couple of seconds workflow.scheduledTime Scheduled runtime formatted in RFC 3339 (only available for CronWorkflow )","title":"Global"},{"location":"variables/#exit-handler","text":"Variable Description workflow.status Workflow status. One of: Succeeded , Failed , Error workflow.failures A list of JSON objects containing information about nodes that failed or errored during execution. Available fields: displayName , message , templateName , phase , podName , and finishedAt .","title":"Exit Handler"},{"location":"variables/#knowing-where-you-are","text":"The idea with creating a WorkflowTemplate is that they are reusable bits of code you will use in many actual Workflows. Sometimes it is useful to know which workflow you are part of. workflow.mainEntrypoint is one way you can do this. If each of your actual workflows has a differing entrypoint, you can identify the workflow you're part of. Given this use in a WorkflowTemplate : apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : say-main-entrypoint spec : entrypoint : echo templates : - name : echo container : image : alpine command : [ echo ] args : [ \"{{workflow.mainEntrypoint}}\" ] I can distinguish my caller: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : foo- spec : entrypoint : foo templates : - name : foo steps : - - name : step templateRef : name : say-main-entrypoint template : echo results in a log of foo apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : bar- spec : entrypoint : bar templates : - name : bar steps : - - name : step templateRef : name : say-main-entrypoint template : echo results in a log of bar This shouldn't be that helpful in logging, you should be able to identify workflows through other labels in your cluster's log tool, but can be helpful when generating metrics for the workflow for example.","title":"Knowing where you are"},{"location":"webhooks/","text":"Webhooks \u00b6 v2.11 and after Many clients can send events via the events API endpoint using a standard authorization header. However, for clients that are unable to do so (e.g. because they use signature verification as proof of origin), additional configuration is required. 
In the namespace that will receive the event, create access token resources for your client: A role with permissions to get workflow templates and to create a workflow: example A service account for the client: example . A binding of the account to the role: example Additionally create: A secret named argo-workflows-webhook-clients listing the service accounts: example The secret argo-workflows-webhook-clients tells Argo: What type of webhook the account can be used for, e.g. github . What \"secret\" that webhook is configured for, e.g. in your Github settings page.","title":"Webhooks"},{"location":"webhooks/#webhooks","text":"v2.11 and after Many clients can send events via the events API endpoint using a standard authorization header. However, for clients that are unable to do so (e.g. because they use signature verification as proof of origin), additional configuration is required. In the namespace that will receive the event, create access token resources for your client: A role with permissions to get workflow templates and to create a workflow: example A service account for the client: example . A binding of the account to the role: example Additionally create: A secret named argo-workflows-webhook-clients listing the service accounts: example The secret argo-workflows-webhook-clients tells Argo: What type of webhook the account can be used for, e.g. github . What \"secret\" that webhook is configured for, e.g. in your Github settings page.","title":"Webhooks"},{"location":"widgets/","text":"Widgets \u00b6 v3.0 and after Widgets are intended to be embedded into other applications using inline frames ( iframe ). This may not work with your configuration. You may need to: Run the Argo Server with an account that can read workflows. That can be done using --auth-mode=server and configuring the argo-server service account. Run the Argo Server with --x-frame-options=SAMEORIGIN or --x-frame-options= .","title":"Widgets"},{"location":"widgets/#widgets","text":"v3.0 and after Widgets are intended to be embedded into other applications using inline frames ( iframe ). This may not work with your configuration. You may need to: Run the Argo Server with an account that can read workflows. That can be done using --auth-mode=server and configuring the argo-server service account. Run the Argo Server with --x-frame-options=SAMEORIGIN or --x-frame-options= .","title":"Widgets"},{"location":"windows/","text":"Windows Container Support \u00b6 The Argo server and the workflow controller currently only run on Linux. The workflow executor however also runs on Windows nodes, meaning you can use Windows containers inside your workflows! Here are the steps to get started. 
Requirements \u00b6 Kubernetes 1.14 or later, supporting Windows nodes Hybrid cluster containing Linux and Windows nodes like described in the Kubernetes docs Argo configured and running like described here Schedule workflows with Windows containers \u00b6 If you're running workflows in your hybrid Kubernetes cluster, always make sure to include a nodeSelector to run the steps on the correct host OS: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-windows- spec : entrypoint : hello-win templates : - name : hello-win nodeSelector : kubernetes.io/os : windows # specify the OS your step should run on container : image : mcr.microsoft.com/windows/nanoserver:1809 command : [ \"cmd\" , \"/c\" ] args : [ \"echo\" , \"Hello from Windows Container!\" ] You can run this example and get the logs: $ argo submit --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-windows.yaml $ argo logs hello-windows-s9kk5 hello-windows-s9kk5: \"Hello from Windows Container!\" Schedule hybrid workflows \u00b6 You can also run different steps on different host operating systems. This can for example be very helpful when you need to compile your application on Windows and Linux. An example workflow can look like the following: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-hybrid- spec : entrypoint : mytemplate templates : - name : mytemplate steps : - - name : step1 template : hello-win - - name : step2 template : hello-linux - name : hello-win nodeSelector : kubernetes.io/os : windows container : image : mcr.microsoft.com/windows/nanoserver:1809 command : [ \"cmd\" , \"/c\" ] args : [ \"echo\" , \"Hello from Windows Container!\" ] - name : hello-linux nodeSelector : kubernetes.io/os : linux container : image : alpine command : [ echo ] args : [ \"Hello from Linux Container!\" ] Again, you can run this example and get the logs: $ argo submit --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-hybrid.yaml $ argo logs hello-hybrid-plqpp hello-hybrid-plqpp-1977432187: \"Hello from Windows Container!\" hello-hybrid-plqpp-764774907: Hello from Linux Container! Artifact mount path \u00b6 Artifacts work mostly the same way as on Linux. All paths get automatically mapped to the C: drive. For example: # ... - name : print-message inputs : artifacts : # unpack the message input artifact # and put it at C:\\message - name : message path : \"/message\" # gets mapped to C:\\message nodeSelector : kubernetes.io/os : windows container : image : mcr.microsoft.com/windows/nanoserver:1809 command : [ \"cmd\" , \"/c\" ] args : [ \"dir C:\\\\message\" ] # List the C:\\message directory Remember that volume mounts on Windows can only target a directory in the container, and not an individual file. Limitations \u00b6 Sharing process namespaces doesn't work on Windows so you can't use the Process Namespace Sharing (PNS) workflow executor. The executor Windows container is built using Nano Server as the base image. Running a newer windows version (e.g. 1909) is currently not confirmed to be working . If this is required, you need to build the executor container yourself by first adjusting the base image. Building the workflow executor image for Windows \u00b6 To build the workflow executor image for Windows you need a Windows machine running Windows Server 2019 with Docker installed like described in the docs . 
You then clone the project and run the Docker build with the Dockerfile for Windows and argoexec as a target: git clone https://github.com/argoproj/argo-workflows.git cd argo docker build -t myargoexec -f . \\D ockerfile.windows --target argoexec .","title":"Windows Container Support"},{"location":"windows/#windows-container-support","text":"The Argo server and the workflow controller currently only run on Linux. The workflow executor however also runs on Windows nodes, meaning you can use Windows containers inside your workflows! Here are the steps to get started.","title":"Windows Container Support"},{"location":"windows/#requirements","text":"Kubernetes 1.14 or later, supporting Windows nodes Hybrid cluster containing Linux and Windows nodes like described in the Kubernetes docs Argo configured and running like described here","title":"Requirements"},{"location":"windows/#schedule-workflows-with-windows-containers","text":"If you're running workflows in your hybrid Kubernetes cluster, always make sure to include a nodeSelector to run the steps on the correct host OS: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-windows- spec : entrypoint : hello-win templates : - name : hello-win nodeSelector : kubernetes.io/os : windows # specify the OS your step should run on container : image : mcr.microsoft.com/windows/nanoserver:1809 command : [ \"cmd\" , \"/c\" ] args : [ \"echo\" , \"Hello from Windows Container!\" ] You can run this example and get the logs: $ argo submit --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-windows.yaml $ argo logs hello-windows-s9kk5 hello-windows-s9kk5: \"Hello from Windows Container!\"","title":"Schedule workflows with Windows containers"},{"location":"windows/#schedule-hybrid-workflows","text":"You can also run different steps on different host operating systems. This can for example be very helpful when you need to compile your application on Windows and Linux. An example workflow can look like the following: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-hybrid- spec : entrypoint : mytemplate templates : - name : mytemplate steps : - - name : step1 template : hello-win - - name : step2 template : hello-linux - name : hello-win nodeSelector : kubernetes.io/os : windows container : image : mcr.microsoft.com/windows/nanoserver:1809 command : [ \"cmd\" , \"/c\" ] args : [ \"echo\" , \"Hello from Windows Container!\" ] - name : hello-linux nodeSelector : kubernetes.io/os : linux container : image : alpine command : [ echo ] args : [ \"Hello from Linux Container!\" ] Again, you can run this example and get the logs: $ argo submit --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-hybrid.yaml $ argo logs hello-hybrid-plqpp hello-hybrid-plqpp-1977432187: \"Hello from Windows Container!\" hello-hybrid-plqpp-764774907: Hello from Linux Container!","title":"Schedule hybrid workflows"},{"location":"windows/#artifact-mount-path","text":"Artifacts work mostly the same way as on Linux. All paths get automatically mapped to the C: drive. For example: # ... 
- name : print-message inputs : artifacts : # unpack the message input artifact # and put it at C:\\message - name : message path : \"/message\" # gets mapped to C:\\message nodeSelector : kubernetes.io/os : windows container : image : mcr.microsoft.com/windows/nanoserver:1809 command : [ \"cmd\" , \"/c\" ] args : [ \"dir C:\\\\message\" ] # List the C:\\message directory Remember that volume mounts on Windows can only target a directory in the container, and not an individual file.","title":"Artifact mount path"},{"location":"windows/#limitations","text":"Sharing process namespaces doesn't work on Windows so you can't use the Process Namespace Sharing (PNS) workflow executor. The executor Windows container is built using Nano Server as the base image. Running a newer windows version (e.g. 1909) is currently not confirmed to be working . If this is required, you need to build the executor container yourself by first adjusting the base image.","title":"Limitations"},{"location":"windows/#building-the-workflow-executor-image-for-windows","text":"To build the workflow executor image for Windows you need a Windows machine running Windows Server 2019 with Docker installed like described in the docs . You then clone the project and run the Docker build with the Dockerfile for Windows and argoexec as a target: git clone https://github.com/argoproj/argo-workflows.git cd argo docker build -t myargoexec -f . \\D ockerfile.windows --target argoexec .","title":"Building the workflow executor image for Windows"},{"location":"work-avoidance/","text":"Work Avoidance \u00b6 v2.9 and after You can make workflows faster and more robust by employing work avoidance . A workflow that utilizes this is simply a workflow containing steps that do not run if the work has already been done. This is a technique is similar to memoization . Work avoidance is totally in your control and you make the decisions as to have to skip the work. Memoization is a feature of Argo Workflows to automatically skip steps which generate outputs. Prior to version 3.5 this required outputs to be specified, but you can use memoization for all steps and tasks in version 3.5 or later. This simplest way to do this is to use marker files . Use cases: An expensive step appears across multiple workflows - you want to avoid repeating them. A workflow has unreliable tasks - you want to be able to resubmit the workflow. A marker file is a file that indicates the work has already been done. Before doing the work you check to see if the marker has already been done: if [ -e /work/markers/name-of-task ] ; then echo \"work already done\" exit 0 fi echo \"working very hard\" touch /work/markers/name-of-task Choose a name for the file that is unique for the task, e.g. the template name and all the parameters: touch /work/markers/ $( date +%Y-%m-%d ) -echo- {{ inputs.parameters.num }} You need to store the marker files between workflows and this can be achieved using a PVC and optional input artifact . This complete work avoidance example has the following: A PVC to store the markers on. A load-markers step that loads the marker files from artifact storage. Multiple echo tasks that avoid work using marker files. A save-markers exit handler to save the marker files, even if they are not needed.","title":"Work Avoidance"},{"location":"work-avoidance/#work-avoidance","text":"v2.9 and after You can make workflows faster and more robust by employing work avoidance . 
A workflow that utilizes this is simply a workflow containing steps that do not run if the work has already been done. This is a technique similar to memoization . Work avoidance is totally in your control and you make the decisions as to whether to skip the work. Memoization is a feature of Argo Workflows to automatically skip steps which generate outputs. Prior to version 3.5 this required outputs to be specified, but you can use memoization for all steps and tasks in version 3.5 or later. The simplest way to do this is to use marker files . Use cases: An expensive step appears across multiple workflows - you want to avoid repeating it. A workflow has unreliable tasks - you want to be able to resubmit the workflow. A marker file is a file that indicates the work has already been done. Before doing the work, you check to see if the marker file already exists: if [ -e /work/markers/name-of-task ] ; then echo \"work already done\" exit 0 fi echo \"working very hard\" touch /work/markers/name-of-task Choose a name for the file that is unique for the task, e.g. the template name and all the parameters: touch /work/markers/ $( date +%Y-%m-%d ) -echo- {{ inputs.parameters.num }} You need to store the marker files between workflows and this can be achieved using a PVC and an optional input artifact . This complete work avoidance example has the following: A PVC to store the markers on. A load-markers step that loads the marker files from artifact storage. Multiple echo tasks that avoid work using marker files. A save-markers exit handler to save the marker files, even if they are not needed.","title":"Work Avoidance"},{"location":"workflow-archive/","text":"Workflow Archive \u00b6 v2.5 and after If you want to keep completed workflows for a long time, you can use the workflow archive to save them in a Postgres or MySQL (>= 5.7.8) database. The workflow archive stores the status of the workflow, which pods have been executed, what the result was, etc. The job logs of the workflow pods will not be archived. If you need to save the logs of the pods, you must set up an artifact repository according to this doc . The quick-start deployment includes a Postgres database server. In this case the workflow archive is already enabled. Such a deployment is convenient for test environments, but in a production environment you must use a production quality database service. Enabling Workflow Archive \u00b6 To enable archiving of the workflows, you must configure database parameters in the persistence section of your configuration and set archive: to true . Example: persistence : archive : true postgresql : host : localhost port : 5432 database : postgres tableName : argo_workflows userNameSecret : name : argo - postgres - config key : username passwordSecret : name : argo - postgres - config key : password You must also create the secret with database user and password in the namespace of the workflow controller. Example: kubectl create secret generic argo-postgres-config -n argo --from-literal=password=mypassword --from-literal=username=argodbuser Note that IAM-based authentication is not currently supported.
The following tables will be created in the database when you start the workflow controller with enabled archive: argo_workflows argo_archived_workflows argo_archived_workflows_labels schema_history Automatic Database Migration \u00b6 Every time the Argo workflow-controller starts with persistence enabled, it tries to migrate the database to the correct version. If the database migration fails, the workflow-controller will also fail to start. In this case you can delete all the above tables and restart the workflow-controller. If you know what are you doing you also have an option to skip migration: persistence : skipMigration : true Required database permissions \u00b6 Postgres \u00b6 The database user/role must have CREATE and USAGE permissions on the public schema of the database so that the tables can be created during the migration. Archive TTL \u00b6 You can configure the time period to keep archived workflows before they will be deleted by the archived workflow garbage collection function. The default is forever. Example: persistence : archiveTTL : 10 d The ARCHIVED_WORKFLOW_GC_PERIOD variable defines the periodicity of running the garbage collection function. The default value is documented here . When the workflow controller starts, it sets the ticker to run every ARCHIVED_WORKFLOW_GC_PERIOD . It does not run the garbage collection function immediately and the first garbage collection happens only after the period defined in the ARCHIVED_WORKFLOW_GC_PERIOD variable. Cluster Name \u00b6 Optionally you can set a unique name of your Kubernetes cluster. This name will populate the clustername field in the argo_archived_workflows table. Example: persistence : clusterName : dev - cluster Disabling Workflow Archive \u00b6 To disable archiving of the workflows, set archive: to false in the persistence section of your configuration . Example: persistence : archive : false","title":"Workflow Archive"},{"location":"workflow-archive/#workflow-archive","text":"v2.5 and after If you want to keep completed workflows for a long time, you can use the workflow archive to save them in a Postgres or MySQL (>= 5.7.8) database. The workflow archive stores the status of the workflow, which pods have been executed, what was the result etc. The job logs of the workflow pods will not be archived. If you need to save the logs of the pods, you must setup an artifact repository according to this doc . The quick-start deployment includes a Postgres database server. In this case the workflow archive is already enabled. Such a deployment is convenient for test environments, but in a production environment you must use a production quality database service.","title":"Workflow Archive"},{"location":"workflow-archive/#enabling-workflow-archive","text":"To enable archiving of the workflows, you must configure database parameters in the persistence section of your configuration and set archive: to true . Example: persistence : archive : true postgresql : host : localhost port : 5432 database : postgres tableName : argo_workflows userNameSecret : name : argo - postgres - config key : username passwordSecret : name : argo - postgres - config key : password You must also create the secret with database user and password in the namespace of the workflow controller. Example: kubectl create secret generic argo-postgres-config -n argo --from-literal=password=mypassword --from-literal=username=argodbuser Note that IAM-based authentication is not currently supported. 
However, you can start your database proxy as a sidecar (e.g. via CloudSQL Proxy on GCP) and then specify your local proxy address, IAM username, and an empty string as your password in the persistence configuration to connect to it. The following tables will be created in the database when you start the workflow controller with enabled archive: argo_workflows argo_archived_workflows argo_archived_workflows_labels schema_history","title":"Enabling Workflow Archive"},{"location":"workflow-archive/#automatic-database-migration","text":"Every time the Argo workflow-controller starts with persistence enabled, it tries to migrate the database to the correct version. If the database migration fails, the workflow-controller will also fail to start. In this case you can delete all the above tables and restart the workflow-controller. If you know what are you doing you also have an option to skip migration: persistence : skipMigration : true","title":"Automatic Database Migration"},{"location":"workflow-archive/#required-database-permissions","text":"","title":"Required database permissions"},{"location":"workflow-archive/#postgres","text":"The database user/role must have CREATE and USAGE permissions on the public schema of the database so that the tables can be created during the migration.","title":"Postgres"},{"location":"workflow-archive/#archive-ttl","text":"You can configure the time period to keep archived workflows before they will be deleted by the archived workflow garbage collection function. The default is forever. Example: persistence : archiveTTL : 10 d The ARCHIVED_WORKFLOW_GC_PERIOD variable defines the periodicity of running the garbage collection function. The default value is documented here . When the workflow controller starts, it sets the ticker to run every ARCHIVED_WORKFLOW_GC_PERIOD . It does not run the garbage collection function immediately and the first garbage collection happens only after the period defined in the ARCHIVED_WORKFLOW_GC_PERIOD variable.","title":"Archive TTL"},{"location":"workflow-archive/#cluster-name","text":"Optionally you can set a unique name of your Kubernetes cluster. This name will populate the clustername field in the argo_archived_workflows table. Example: persistence : clusterName : dev - cluster","title":"Cluster Name"},{"location":"workflow-archive/#disabling-workflow-archive","text":"To disable archiving of the workflows, set archive: to false in the persistence section of your configuration . Example: persistence : archive : false","title":"Disabling Workflow Archive"},{"location":"workflow-concepts/","text":"Core Concepts \u00b6 This page serves as an introduction to the core concepts of Argo. The Workflow \u00b6 The Workflow is the most important resource in Argo and serves two important functions: It defines the workflow to be executed. It stores the state of the workflow. Because of these dual responsibilities, a Workflow should be treated as a \"live\" object. It is not only a static definition, but is also an \"instance\" of said definition. (If it isn't clear what this means, it will be explained below). Workflow Spec \u00b6 The workflow to be executed is defined in the Workflow.spec field. The core structure of a Workflow spec is a list of templates and an entrypoint . templates can be loosely thought of as \"functions\": they define instructions to be executed. The entrypoint field defines what the \"main\" function will be \u2013 that is, the template that will be executed first. 
Here is an example of a simple Workflow spec with a single template : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world- # Name of this Workflow spec : entrypoint : whalesay # Defines \"whalesay\" as the \"main\" template templates : - name : whalesay # Defining the \"whalesay\" template container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ] # This template runs \"cowsay\" in the \"whalesay\" image with arguments \"hello world\" template Types \u00b6 There are 6 types of templates, divided into two different categories. Template Definitions \u00b6 These templates define work to be done, usually in a Container. Container \u00b6 Perhaps the most common template type, it will schedule a Container. The spec of the template is the same as the Kubernetes container spec , so you can define a container here the same way you do anywhere else in Kubernetes. Example: - name : whalesay container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ] Script \u00b6 A convenience wrapper around a container . The spec is the same as a container, but adds the source: field which allows you to define a script in-place. The script will be saved into a file and executed for you. The result of the script is automatically exported into an Argo variable either {{tasks..outputs.result}} or {{steps..outputs.result}} , depending how it was called. Example: - name : gen-random-int script : image : python:alpine3.6 command : [ python ] source : | import random i = random.randint(1, 100) print(i) Resource \u00b6 Performs operations on cluster Resources directly. It can be used to get, create, apply, delete, replace, or patch resources on your cluster. This example creates a ConfigMap resource on the cluster: - name : k8s-owner-reference resource : action : create manifest : | apiVersion: v1 kind: ConfigMap metadata: generateName: owned-eg- data: some: value Suspend \u00b6 A suspend template will suspend execution, either for a duration or until it is resumed manually. Suspend templates can be resumed from the CLI (with argo resume ), the API endpoint , or the UI. Example: - name : delay suspend : duration : \"20s\" Template Invocators \u00b6 These templates are used to invoke/call other templates and provide execution control. Steps \u00b6 A steps template allows you to define your tasks in a series of steps. The structure of the template is a \"list of lists\". Outer lists will run sequentially and inner lists will run in parallel. If you want to run inner lists one by one, use the Synchronization feature. You can set a wide array of options to control execution, such as when: clauses to conditionally execute a step . In this example step1 runs first. Once it is completed, step2a and step2b will run in parallel: - name : hello-hello-hello steps : - - name : step1 template : prepare-data - - name : step2a template : run-data-first-half - name : step2b template : run-data-second-half DAG \u00b6 A dag template allows you to define your tasks as a graph of dependencies. In a DAG, you list all your tasks and set which other tasks must complete before a particular task can begin. Tasks without any dependencies will be run immediately. In this example A runs first. 
Once it is completed, B and C will run in parallel and once they both complete, D will run: - name : diamond dag : tasks : - name : A template : echo - name : B dependencies : [ A ] template : echo - name : C dependencies : [ A ] template : echo - name : D dependencies : [ B , C ] template : echo Architecture \u00b6 If you are interested in Argo's underlying architecture, see Architecture .","title":"Core Concepts"},{"location":"workflow-concepts/#core-concepts","text":"This page serves as an introduction to the core concepts of Argo.","title":"Core Concepts"},{"location":"workflow-concepts/#the-workflow","text":"The Workflow is the most important resource in Argo and serves two important functions: It defines the workflow to be executed. It stores the state of the workflow. Because of these dual responsibilities, a Workflow should be treated as a \"live\" object. It is not only a static definition, but is also an \"instance\" of said definition. (If it isn't clear what this means, it will be explained below).","title":"The Workflow"},{"location":"workflow-concepts/#workflow-spec","text":"The workflow to be executed is defined in the Workflow.spec field. The core structure of a Workflow spec is a list of templates and an entrypoint . templates can be loosely thought of as \"functions\": they define instructions to be executed. The entrypoint field defines what the \"main\" function will be \u2013 that is, the template that will be executed first. Here is an example of a simple Workflow spec with a single template : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world- # Name of this Workflow spec : entrypoint : whalesay # Defines \"whalesay\" as the \"main\" template templates : - name : whalesay # Defining the \"whalesay\" template container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ] # This template runs \"cowsay\" in the \"whalesay\" image with arguments \"hello world\"","title":"Workflow Spec"},{"location":"workflow-concepts/#template-types","text":"There are 6 types of templates, divided into two different categories.","title":"template Types"},{"location":"workflow-concepts/#template-definitions","text":"These templates define work to be done, usually in a Container.","title":"Template Definitions"},{"location":"workflow-concepts/#container","text":"Perhaps the most common template type, it will schedule a Container. The spec of the template is the same as the Kubernetes container spec , so you can define a container here the same way you do anywhere else in Kubernetes. Example: - name : whalesay container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ]","title":"Container"},{"location":"workflow-concepts/#script","text":"A convenience wrapper around a container . The spec is the same as a container, but adds the source: field which allows you to define a script in-place. The script will be saved into a file and executed for you. The result of the script is automatically exported into an Argo variable either {{tasks..outputs.result}} or {{steps..outputs.result}} , depending how it was called. Example: - name : gen-random-int script : image : python:alpine3.6 command : [ python ] source : | import random i = random.randint(1, 100) print(i)","title":"Script"},{"location":"workflow-concepts/#resource","text":"Performs operations on cluster Resources directly. It can be used to get, create, apply, delete, replace, or patch resources on your cluster. 
This example creates a ConfigMap resource on the cluster: - name : k8s-owner-reference resource : action : create manifest : | apiVersion: v1 kind: ConfigMap metadata: generateName: owned-eg- data: some: value","title":"Resource"},{"location":"workflow-concepts/#suspend","text":"A suspend template will suspend execution, either for a duration or until it is resumed manually. Suspend templates can be resumed from the CLI (with argo resume ), the API endpoint , or the UI. Example: - name : delay suspend : duration : \"20s\"","title":"Suspend"},{"location":"workflow-concepts/#template-invocators","text":"These templates are used to invoke/call other templates and provide execution control.","title":"Template Invocators"},{"location":"workflow-concepts/#steps","text":"A steps template allows you to define your tasks in a series of steps. The structure of the template is a \"list of lists\". Outer lists will run sequentially and inner lists will run in parallel. If you want to run inner lists one by one, use the Synchronization feature. You can set a wide array of options to control execution, such as when: clauses to conditionally execute a step . In this example step1 runs first. Once it is completed, step2a and step2b will run in parallel: - name : hello-hello-hello steps : - - name : step1 template : prepare-data - - name : step2a template : run-data-first-half - name : step2b template : run-data-second-half","title":"Steps"},{"location":"workflow-concepts/#dag","text":"A dag template allows you to define your tasks as a graph of dependencies. In a DAG, you list all your tasks and set which other tasks must complete before a particular task can begin. Tasks without any dependencies will be run immediately. In this example A runs first. Once it is completed, B and C will run in parallel and once they both complete, D will run: - name : diamond dag : tasks : - name : A template : echo - name : B dependencies : [ A ] template : echo - name : C dependencies : [ A ] template : echo - name : D dependencies : [ B , C ] template : echo","title":"DAG"},{"location":"workflow-concepts/#architecture","text":"If you are interested in Argo's underlying architecture, see Architecture .","title":"Architecture"},{"location":"workflow-controller-configmap/","text":"Workflow Controller Config Map \u00b6 Introduction \u00b6 The Workflow Controller Config Map is used to set controller-wide settings. For a detailed example, please see workflow-controller-configmap.yaml . Alternate Structure \u00b6 In all versions, the configuration may be under a config: | key: # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : config : | instanceID: my-ci-controller artifactRepository: archiveLogs: true s3: endpoint: s3.amazonaws.com bucket: my-bucket region: us-west-2 insecure: false accessKeySecret: name: my-s3-credentials key: accessKey secretKeySecret: name: my-s3-credentials key: secretKey In version 2.7+, the config: | key is optional. However, if the config: | key is not used, all nested maps under top level keys should be strings. This makes it easier to generate the map with some configuration management tools like Kustomize. # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : # \"config: |\" key is optional in 2.7+! 
instanceID : my-ci-controller artifactRepository : | # However, all nested maps must be strings archiveLogs: true s3: endpoint: s3.amazonaws.com bucket: my-bucket region: us-west-2 insecure: false accessKeySecret: name: my-s3-credentials key: accessKey secretKeySecret: name: my-s3-credentials key: secretKey","title":"Workflow Controller Config Map"},{"location":"workflow-controller-configmap/#workflow-controller-config-map","text":"","title":"Workflow Controller Config Map"},{"location":"workflow-controller-configmap/#introduction","text":"The Workflow Controller Config Map is used to set controller-wide settings. For a detailed example, please see workflow-controller-configmap.yaml .","title":"Introduction"},{"location":"workflow-controller-configmap/#alternate-structure","text":"In all versions, the configuration may be under a config: | key: # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : config : | instanceID: my-ci-controller artifactRepository: archiveLogs: true s3: endpoint: s3.amazonaws.com bucket: my-bucket region: us-west-2 insecure: false accessKeySecret: name: my-s3-credentials key: accessKey secretKeySecret: name: my-s3-credentials key: secretKey In version 2.7+, the config: | key is optional. However, if the config: | key is not used, all nested maps under top level keys should be strings. This makes it easier to generate the map with some configuration management tools like Kustomize. # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : # \"config: |\" key is optional in 2.7+! instanceID : my-ci-controller artifactRepository : | # However, all nested maps must be strings archiveLogs: true s3: endpoint: s3.amazonaws.com bucket: my-bucket region: us-west-2 insecure: false accessKeySecret: name: my-s3-credentials key: accessKey secretKeySecret: name: my-s3-credentials key: secretKey","title":"Alternate Structure"},{"location":"workflow-creator/","text":"Workflow Creator \u00b6 v2.9 and after If you create your workflow via the CLI or UI, an attempt will be made to label it with the user who created it apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : name : my-wf labels : workflows.argoproj.io/creator : admin # labels must be DNS formatted, so the \"@\" is replaces by '.at.' workflows.argoproj.io/creator-email : admin.at.your.org workflows.argoproj.io/creator-preferred-username : admin-preferred-username Note Labels only contain [-_.0-9a-zA-Z] , so any other characters will be turned into - .","title":"Workflow Creator"},{"location":"workflow-creator/#workflow-creator","text":"v2.9 and after If you create your workflow via the CLI or UI, an attempt will be made to label it with the user who created it apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : name : my-wf labels : workflows.argoproj.io/creator : admin # labels must be DNS formatted, so the \"@\" is replaces by '.at.' workflows.argoproj.io/creator-email : admin.at.your.org workflows.argoproj.io/creator-preferred-username : admin-preferred-username Note Labels only contain [-_.0-9a-zA-Z] , so any other characters will be turned into - .","title":"Workflow Creator"},{"location":"workflow-events/","text":"Workflow Events \u00b6 v2.7.2 \u26a0\ufe0f Do not use Kubernetes events for automation. Events maybe lost or rolled-up. 
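If you do need to act on workflow completion, an exit handler (see the Workflow Notifications entry later in this index) is a more reliable hook than watching these events. The following is only an illustrative sketch: the notify template and the webhook URL are hypothetical and not part of the documentation above.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: exit-handler-
spec:
  entrypoint: main
  onExit: notify                 # runs after the workflow finishes, whatever the outcome
  templates:
    - name: main
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["hello world"]
    - name: notify
      container:
        image: curlimages/curl:latest
        command: [sh, -c]
        args:
          - >-
            curl -fs -X POST https://example.com/hooks/argo
            -d 'workflow {{workflow.name}} finished with status {{workflow.status}}'
```

Because the handler runs as an ordinary workflow pod, it does not depend on event delivery and is unaffected by the loss or roll-up described in the warning above.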
We emit Kubernetes events on certain events. Workflow state change: WorkflowRunning WorkflowSucceeded WorkflowFailed WorkflowTimedOut Node state change: WorkflowNodeRunning WorkflowNodeSucceeded WorkflowNodeFailed WorkflowNodeError The involved object is the workflow in both cases. Additionally, for node state change events, annotations indicate the name and type of the involved node: metadata : name : my-wf.160434cb3af841f8 namespace : my-ns annotations : workflows.argoproj.io/node-name : my-node workflows.argoproj.io/node-type : Pod type : Normal reason : WorkflowNodeSucceeded message : 'Succeeded node my-node: my message' involvedObject : apiVersion : v1alpha1 kind : Workflow name : my-wf namespace : my-ns resourceVersion : \"1234\" uid : my-uid firstTimestamp : \"2020-04-09T16:50:16Z\" lastTimestamp : \"2020-04-09T16:50:16Z\" count : 1","title":"Workflow Events"},{"location":"workflow-events/#workflow-events","text":"v2.7.2 \u26a0\ufe0f Do not use Kubernetes events for automation. Events maybe lost or rolled-up. We emit Kubernetes events on certain events. Workflow state change: WorkflowRunning WorkflowSucceeded WorkflowFailed WorkflowTimedOut Node state change: WorkflowNodeRunning WorkflowNodeSucceeded WorkflowNodeFailed WorkflowNodeError The involved object is the workflow in both cases. Additionally, for node state change events, annotations indicate the name and type of the involved node: metadata : name : my-wf.160434cb3af841f8 namespace : my-ns annotations : workflows.argoproj.io/node-name : my-node workflows.argoproj.io/node-type : Pod type : Normal reason : WorkflowNodeSucceeded message : 'Succeeded node my-node: my message' involvedObject : apiVersion : v1alpha1 kind : Workflow name : my-wf namespace : my-ns resourceVersion : \"1234\" uid : my-uid firstTimestamp : \"2020-04-09T16:50:16Z\" lastTimestamp : \"2020-04-09T16:50:16Z\" count : 1","title":"Workflow Events"},{"location":"workflow-executors/","text":"Workflow Executors \u00b6 A workflow executor is a process that conforms to a specific interface that allows Argo to perform certain actions like monitoring pod logs, collecting artifacts, managing container life-cycles, etc. The executor to be used in your workflows can be changed in the config map under the containerRuntimeExecutor key (removed in v3.4). Emissary (emissary) \u00b6 v3.1 and after Default in >= v3.3. This is the most fully featured executor. Reliability: Works on GKE Autopilot Does not require init process to kill sub-processes. More secure: No privileged access Cannot escape the privileges of the pod's service account Can runAsNonRoot . Scalable: It reads and writes to and from the container's disk and typically does not use any network APIs unless resource type template is used. Artifacts: Output artifacts can be located on the base layer (e.g. /tmp ). Configuration: command should be specified for containers. You can determine values as follows: docker image inspect -f '{{.Config.Entrypoint}} {{.Config.Cmd}}' argoproj/argosay:v2 Learn more about command and args Image Index/Cache \u00b6 If you don't provide command to run, the emissary will grab it from container image. You can also specify it using the workflow spec or emissary will look it up in the image index . This is nothing more fancy than a configuration item . Emissary will create a cache entry, using image with version as key and command as value, and it will reuse it for specific image/version. Exit Code 64 \u00b6 The emissary will exit with code 64 if it fails. 
This may indicate a bug in the emissary. Docker (docker) \u00b6 \u26a0\ufe0fDeprecated. Removed in v3.4. Default in <= v3.2. Least secure: It requires privileged access to the host's docker.sock to be mounted, which is often rejected by Open Policy Agent (OPA) or your Pod Security Policy (PSP). It can escape the privileges of the pod's service account It cannot runAsNonRoot . Equal most scalable: It communicates directly with the local Docker daemon. Artifacts: Output artifacts can be located on the base layer (e.g. /tmp ). Configuration: No additional configuration needed. Note : when using docker as the workflow executor, messages printed in both stdout and stderr are captured in the Argo variable .outputs.result . Kubelet (kubelet) \u00b6 \u26a0\ufe0fDeprecated. Removed in v3.4. Secure No privileged access Cannot escape the privileges of the pod's service account runAsNonRoot - TBD, see #4186 Scalable: Operations performed against the local Kubelet Artifacts: Output artifacts must be saved on volumes (e.g. empty-dir ) and not the base image layer (e.g. /tmp ) Step/Task result: Warnings that normally go to stderr will be captured in a step or a dag task's outputs.result . May require changes if your pipeline is conditioned on steps/tasks.name.outputs.result Configuration: Additional Kubelet configuration may be needed Kubernetes API ( k8sapi ) \u00b6 \u26a0\ufe0fDeprecated. Removed in v3.4. Reliability: Works on GKE Autopilot Most secure: No privileged access Cannot escape the privileges of the pod's service account Can runAsNonRoot Least scalable: Log retrieval and container operations performed against the remote Kubernetes API Artifacts: Output artifacts must be saved on volumes (e.g. empty-dir ) and not the base image layer (e.g. /tmp ) Step/Task result: Warnings that normally go to stderr will be captured in a step or a dag task's outputs.result . May require changes if your pipeline is conditioned on steps/tasks.name.outputs.result Configuration: No additional configuration needed. Process Namespace Sharing ( pns ) \u00b6 \u26a0\ufe0fDeprecated. Removed in v3.4. More secure: No privileged access Cannot escape the privileges of the pod's service account Can runAsNonRoot , if you use volumes (e.g. empty-dir ) for your output artifacts Processes are visible to other containers in the pod. This includes all information visible in /proc, such as passwords that were passed as arguments or environment variables. These are protected only by regular Unix permissions. Scalable: Most operations use local procfs . Log retrieval uses the remote Kubernetes API Artifacts: Output artifacts can be located on the base layer (e.g. /tmp ) Cannot capture artifacts from a base layer which has a volume mounted under it Cannot capture artifacts from the base layer if the container is short-lived. Configuration: No additional configuration needed. Process will no longer run with PID 1 Doesn't work for Windows containers . Learn more","title":"Workflow Executors"},{"location":"workflow-executors/#workflow-executors","text":"A workflow executor is a process that conforms to a specific interface that allows Argo to perform certain actions like monitoring pod logs, collecting artifacts, managing container life-cycles, etc. The executor to be used in your workflows can be changed in the config map under the containerRuntimeExecutor key (removed in v3.4).","title":"Workflow Executors"},{"location":"workflow-executors/#emissary-emissary","text":"v3.1 and after Default in >= v3.3.
This is the most fully featured executor. Reliability: Works on GKE Autopilot Does not require init process to kill sub-processes. More secure: No privileged access Cannot escape the privileges of the pod's service account Can runAsNonRoot . Scalable: It reads and writes to and from the container's disk and typically does not use any network APIs unless resource type template is used. Artifacts: Output artifacts can be located on the base layer (e.g. /tmp ). Configuration: command should be specified for containers. You can determine values as follows: docker image inspect -f '{{.Config.Entrypoint}} {{.Config.Cmd}}' argoproj/argosay:v2 Learn more about command and args","title":"Emissary (emissary)"},{"location":"workflow-executors/#image-indexcache","text":"If you don't provide command to run, the emissary will grab it from container image. You can also specify it using the workflow spec or emissary will look it up in the image index . This is nothing more fancy than a configuration item . Emissary will create a cache entry, using image with version as key and command as value, and it will reuse it for specific image/version.","title":"Image Index/Cache"},{"location":"workflow-executors/#exit-code-64","text":"The emissary will exit with code 64 if it fails. This may indicate a bug in the emissary.","title":"Exit Code 64"},{"location":"workflow-executors/#docker-docker","text":"\u26a0\ufe0fDeprecated. Removed in v3.4. Default in <= v3.2. Least secure: It requires privileged access to docker.sock of the host to be mounted which. Often rejected by Open Policy Agent (OPA) or your Pod Security Policy (PSP). It can escape the privileges of the pod's service account It cannot runAsNonRoot . Equal most scalable: It communicates directly with the local Docker daemon. Artifacts: Output artifacts can be located on the base layer (e.g. /tmp ). Configuration: No additional configuration needed. Note : when using docker as workflow executors, messages printed in both stdout and stderr are captured in the Argo variable .outputs.result .","title":"Docker (docker)"},{"location":"workflow-executors/#kubelet-kubelet","text":"\u26a0\ufe0fDeprecated. Removed in v3.4. Secure No privileged access Cannot escape the privileges of the pod's service account runAsNonRoot - TBD, see #4186 Scalable: Operations performed against the local Kubelet Artifacts: Output artifacts must be saved on volumes (e.g. empty-dir ) and not the base image layer (e.g. /tmp ) Step/Task result: Warnings that normally goes to stderr will get captured in a step or a dag task's outputs.result . May require changes if your pipeline is conditioned on steps/tasks.name.outputs.result Configuration: Additional Kubelet configuration maybe needed","title":"Kubelet (kubelet)"},{"location":"workflow-executors/#kubernetes-api-k8sapi","text":"\u26a0\ufe0fDeprecated. Removed in v3.4. Reliability: Works on GKE Autopilot Most secure: No privileged access Cannot escape the privileges of the pod's service account Can runAsNonRoot Least scalable: Log retrieval and container operations performed against the remote Kubernetes API Artifacts: Output artifacts must be saved on volumes (e.g. empty-dir ) and not the base image layer (e.g. /tmp ) Step/Task result: Warnings that normally goes to stderr will get captured in a step or a dag task's outputs.result . 
May require changes if your pipeline is conditioned on steps/tasks.name.outputs.result Configuration: No additional configuration needed.","title":"Kubernetes API (k8sapi)"},{"location":"workflow-executors/#process-namespace-sharing-pns","text":"\u26a0\ufe0fDeprecated. Removed in v3.4. More secure: No privileged access cannot escape the privileges of the pod's service account Can runAsNonRoot , if you use volumes (e.g. empty-dir ) for your output artifacts Processes are visible to other containers in the pod. This includes all information visible in /proc, such as passwords that were passed as arguments or environment variables. These are protected only by regular Unix permissions. Scalable: Most operations use local procfs . Log retrieval uses the remote Kubernetes API Artifacts: Output artifacts can be located on the base layer (e.g. /tmp ) Cannot capture artifacts from a base layer which has a volume mounted under it Cannot capture artifacts from base layer if the container is short-lived. Configuration: No additional configuration needed. Process will no longer run with PID 1 Doesn't work for Windows containers . Learn more","title":"Process Namespace Sharing (pns)"},{"location":"workflow-inputs/","text":"Workflow Inputs \u00b6 Introduction \u00b6 Workflows and template s operate on a set of defined parameters and arguments that are supplied to the running container. The precise details of how to manage the inputs can be confusing; this article attempts to clarify concepts and provide simple working examples to illustrate the various configuration options. The examples below are limited to DAGTemplate s and mainly focused on parameters , but similar reasoning applies to the other types of template s. Parameter Inputs \u00b6 First, some clarification of terms is needed. For a glossary reference, see Argo Core Concepts . A workflow provides arguments , which are passed in to the entry point template. A template defines inputs which are then provided by template callers (such as steps , dag , or even a workflow ). The structure of both is identical. For example, in a Workflow , one parameter would look like this: arguments : parameters : - name : workflow-param-1 And in a template : inputs : parameters : - name : template-param-1 Inputs to DAGTemplate s use the arguments format: dag : tasks : - name : step-A template : step-template-a arguments : parameters : - name : template-param-1 value : abcd Previous examples in context: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : example- spec : entrypoint : main arguments : parameters : - name : workflow-param-1 templates : - name : main dag : tasks : - name : step-A template : step-template-a arguments : parameters : - name : template-param-1 value : \"{{workflow.parameters.workflow-param-1}}\" - name : step-template-a inputs : parameters : - name : template-param-1 script : image : alpine command : [ /bin/sh ] source : | echo \"{{inputs.parameters.template-param-1}}\" To run this example: argo submit -n argo example.yaml -p 'workflow-param-1=\"abcd\"' --watch Using Previous Step Outputs As Inputs \u00b6 In DAGTemplate s, it is common to want to take the output of one step and send it as the input to another step. However, there is a difference in how this works for artifacts vs parameters. 
Suppose our step-template-a defines some outputs: outputs : parameters : - name : output-param-1 valueFrom : path : /p1.txt artifacts : - name : output-artifact-1 path : /some-directory In my DAGTemplate , I can send these outputs to another template like this: dag : tasks : - name : step-A template : step-template-a arguments : parameters : - name : template-param-1 value : \"{{workflow.parameters.workflow-param-1}}\" - name : step-B dependencies : [ step-A ] template : step-template-b arguments : parameters : - name : template-param-2 value : \"{{tasks.step-A.outputs.parameters.output-param-1}}\" artifacts : - name : input-artifact-1 from : \"{{tasks.step-A.outputs.artifacts.output-artifact-1}}\" Note the important distinction between parameters and artifacts ; they both share the name field, but one uses value and the other uses from .","title":"Workflow Inputs"},{"location":"workflow-inputs/#workflow-inputs","text":"","title":"Workflow Inputs"},{"location":"workflow-inputs/#introduction","text":"Workflows and template s operate on a set of defined parameters and arguments that are supplied to the running container. The precise details of how to manage the inputs can be confusing; this article attempts to clarify concepts and provide simple working examples to illustrate the various configuration options. The examples below are limited to DAGTemplate s and mainly focused on parameters , but similar reasoning applies to the other types of template s.","title":"Introduction"},{"location":"workflow-inputs/#parameter-inputs","text":"First, some clarification of terms is needed. For a glossary reference, see Argo Core Concepts . A workflow provides arguments , which are passed in to the entry point template. A template defines inputs which are then provided by template callers (such as steps , dag , or even a workflow ). The structure of both is identical. For example, in a Workflow , one parameter would look like this: arguments : parameters : - name : workflow-param-1 And in a template : inputs : parameters : - name : template-param-1 Inputs to DAGTemplate s use the arguments format: dag : tasks : - name : step-A template : step-template-a arguments : parameters : - name : template-param-1 value : abcd Previous examples in context: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : example- spec : entrypoint : main arguments : parameters : - name : workflow-param-1 templates : - name : main dag : tasks : - name : step-A template : step-template-a arguments : parameters : - name : template-param-1 value : \"{{workflow.parameters.workflow-param-1}}\" - name : step-template-a inputs : parameters : - name : template-param-1 script : image : alpine command : [ /bin/sh ] source : | echo \"{{inputs.parameters.template-param-1}}\" To run this example: argo submit -n argo example.yaml -p 'workflow-param-1=\"abcd\"' --watch","title":"Parameter Inputs"},{"location":"workflow-inputs/#using-previous-step-outputs-as-inputs","text":"In DAGTemplate s, it is common to want to take the output of one step and send it as the input to another step. However, there is a difference in how this works for artifacts vs parameters. 
Suppose our step-template-a defines some outputs: outputs : parameters : - name : output-param-1 valueFrom : path : /p1.txt artifacts : - name : output-artifact-1 path : /some-directory In my DAGTemplate , I can send these outputs to another template like this: dag : tasks : - name : step-A template : step-template-a arguments : parameters : - name : template-param-1 value : \"{{workflow.parameters.workflow-param-1}}\" - name : step-B dependencies : [ step-A ] template : step-template-b arguments : parameters : - name : template-param-2 value : \"{{tasks.step-A.outputs.parameters.output-param-1}}\" artifacts : - name : input-artifact-1 from : \"{{tasks.step-A.outputs.artifacts.output-artifact-1}}\" Note the important distinction between parameters and artifacts ; they both share the name field, but one uses value and the other uses from .","title":"Using Previous Step Outputs As Inputs"},{"location":"workflow-notifications/","text":"Workflow Notifications \u00b6 There are a number of use cases where you may wish to notify an external system when a workflow completes: Send an email. Send a Slack (or other instant message). Send a message to Kafka (or other message bus). You have options: For individual workflows, can add an exit handler to your workflow, such as in this example . If you want the same for every workflow, you can add an exit handler to the default workflow spec . Use a service (e.g. Heptio Labs EventRouter ) to the Workflow events we emit.","title":"Workflow Notifications"},{"location":"workflow-notifications/#workflow-notifications","text":"There are a number of use cases where you may wish to notify an external system when a workflow completes: Send an email. Send a Slack (or other instant message). Send a message to Kafka (or other message bus). You have options: For individual workflows, can add an exit handler to your workflow, such as in this example . If you want the same for every workflow, you can add an exit handler to the default workflow spec . Use a service (e.g. Heptio Labs EventRouter ) to the Workflow events we emit.","title":"Workflow Notifications"},{"location":"workflow-of-workflows/","text":"Workflow of Workflows \u00b6 v2.9 and after Introduction \u00b6 The Workflow of Workflows pattern involves a parent workflow triggering one or more child workflows, managing them, and acting on their results. Examples \u00b6 You can use workflowTemplateRef to trigger a workflow inline. Define your workflow as a workflowtemplate . apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : entrypoint : whalesay-template arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] Create the Workflowtemplate in cluster using argo template create Define the workflow of workflows. # This template demonstrates a workflow of workflows. # Workflow triggers one or more workflows and manages them. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-of-workflows- spec : entrypoint : main templates : - name : main steps : - - name : workflow1 template : resource-without-argument arguments : parameters : - name : workflowtemplate value : \"workflow-template-submittable\" - - name : workflow2 template : resource-with-argument arguments : parameters : - name : workflowtemplate value : \"workflow-template-submittable\" - name : message value : \"Welcome Argo\" - name : resource-without-argument inputs : parameters : - name : workflowtemplate resource : action : create manifest : | apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: workflow-of-workflows-1- spec: workflowTemplateRef: name: {{inputs.parameters.workflowtemplate}} successCondition : status.phase == Succeeded failureCondition : status.phase in (Failed, Error) - name : resource-with-argument inputs : parameters : - name : workflowtemplate - name : message resource : action : create manifest : | apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: workflow-of-workflows-2- spec: arguments: parameters: - name: message value: {{inputs.parameters.message}} workflowTemplateRef: name: {{inputs.parameters.workflowtemplate}} successCondition : status.phase == Succeeded failureCondition : status.phase in (Failed, Error)","title":"Workflow of Workflows"},{"location":"workflow-of-workflows/#workflow-of-workflows","text":"v2.9 and after","title":"Workflow of Workflows"},{"location":"workflow-of-workflows/#introduction","text":"The Workflow of Workflows pattern involves a parent workflow triggering one or more child workflows, managing them, and acting on their results.","title":"Introduction"},{"location":"workflow-of-workflows/#examples","text":"You can use workflowTemplateRef to trigger a workflow inline. Define your workflow as a workflowtemplate . apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : entrypoint : whalesay-template arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] Create the Workflowtemplate in cluster using argo template create Define the workflow of workflows. # This template demonstrates a workflow of workflows. # Workflow triggers one or more workflows and manages them. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-of-workflows- spec : entrypoint : main templates : - name : main steps : - - name : workflow1 template : resource-without-argument arguments : parameters : - name : workflowtemplate value : \"workflow-template-submittable\" - - name : workflow2 template : resource-with-argument arguments : parameters : - name : workflowtemplate value : \"workflow-template-submittable\" - name : message value : \"Welcome Argo\" - name : resource-without-argument inputs : parameters : - name : workflowtemplate resource : action : create manifest : | apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: workflow-of-workflows-1- spec: workflowTemplateRef: name: {{inputs.parameters.workflowtemplate}} successCondition : status.phase == Succeeded failureCondition : status.phase in (Failed, Error) - name : resource-with-argument inputs : parameters : - name : workflowtemplate - name : message resource : action : create manifest : | apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: workflow-of-workflows-2- spec: arguments: parameters: - name: message value: {{inputs.parameters.message}} workflowTemplateRef: name: {{inputs.parameters.workflowtemplate}} successCondition : status.phase == Succeeded failureCondition : status.phase in (Failed, Error)","title":"Examples"},{"location":"workflow-pod-security-context/","text":"Workflow Pod Security Context \u00b6 By default, all workflow pods run as root. The Docker executor even requires privileged: true . For other workflow executors , you can run your workflow pods more securely by configuring the security context for your workflow pod. This is likely to be necessary if you have a pod security policy . You probably can't use the Docker executor if you have a pod security policy. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : security-context- spec : securityContext : runAsNonRoot : true runAsUser : 8737 #; any non-root user You can configure this globally using workflow defaults . It is easy to make a workflow need root unintentionally You may find that user's workflows have been written to require root with seemingly innocuous code. E.g. mkdir /my-dir would require root. You must use volumes for output artifacts If you use runAsNonRoot - you cannot have output artifacts on base layer (e.g. /tmp ). You must use a volume (e.g. empty dir ).","title":"Workflow Pod Security Context"},{"location":"workflow-pod-security-context/#workflow-pod-security-context","text":"By default, all workflow pods run as root. The Docker executor even requires privileged: true . For other workflow executors , you can run your workflow pods more securely by configuring the security context for your workflow pod. This is likely to be necessary if you have a pod security policy . You probably can't use the Docker executor if you have a pod security policy. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : security-context- spec : securityContext : runAsNonRoot : true runAsUser : 8737 #; any non-root user You can configure this globally using workflow defaults . It is easy to make a workflow need root unintentionally You may find that user's workflows have been written to require root with seemingly innocuous code. E.g. mkdir /my-dir would require root. You must use volumes for output artifacts If you use runAsNonRoot - you cannot have output artifacts on base layer (e.g. /tmp ). You must use a volume (e.g. 
empty dir ).","title":"Workflow Pod Security Context"},{"location":"workflow-rbac/","text":"Workflow RBAC \u00b6 All pods in a workflow run with the service account specified in workflow.spec.serviceAccountName , or if omitted, the default service account of the workflow's namespace. The amount of access which a workflow needs is dependent on what the workflow needs to do. For example, if your workflow needs to deploy a resource, then the workflow's service account will require 'create' privileges on that resource. Warning : We do not recommend using the default service account in production. It is a shared account so may have permissions added to it you do not want. Instead, create a service account only for your workflow. The minimum for the executor to function: For >= v3.4: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : executor rules : - apiGroups : - argoproj.io resources : - workflowtaskresults verbs : - create - patch For <= v3.3 use. apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : executor rules : - apiGroups : - \"\" resources : - pods verbs : - get - patch Warning: For many organizations, it may not be acceptable to give a workflow the pod patch permission, see #3961 If you are not using the emissary, you'll need additional permissions. See executor for suitable permissions.","title":"Workflow RBAC"},{"location":"workflow-rbac/#workflow-rbac","text":"All pods in a workflow run with the service account specified in workflow.spec.serviceAccountName , or if omitted, the default service account of the workflow's namespace. The amount of access which a workflow needs is dependent on what the workflow needs to do. For example, if your workflow needs to deploy a resource, then the workflow's service account will require 'create' privileges on that resource. Warning : We do not recommend using the default service account in production. It is a shared account so may have permissions added to it you do not want. Instead, create a service account only for your workflow. The minimum for the executor to function: For >= v3.4: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : executor rules : - apiGroups : - argoproj.io resources : - workflowtaskresults verbs : - create - patch For <= v3.3 use. apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : executor rules : - apiGroups : - \"\" resources : - pods verbs : - get - patch Warning: For many organizations, it may not be acceptable to give a workflow the pod patch permission, see #3961 If you are not using the emissary, you'll need additional permissions. See executor for suitable permissions.","title":"Workflow RBAC"},{"location":"workflow-restrictions/","text":"Workflow Restrictions \u00b6 v2.9 and after Introduction \u00b6 As the administrator of the controller, you may want to limit which types of Workflows your users can run. Workflow Restrictions allow you to set requirements for all Workflows. Available Restrictions \u00b6 templateReferencing: Strict : Only process Workflows using workflowTemplateRef . You can use this to require usage of WorkflowTemplates, disallowing arbitrary Workflow execution. templateReferencing: Secure : Same as Strict plus enforce that a referenced WorkflowTemplate hasn't changed between operations. If a running Workflow's underlying WorkflowTemplate changes, the Workflow will error out. Setting Workflow Restrictions \u00b6 You can add workflowRestrictions in the workflow-controller-configmap . 
For example, to specify that Workflows may only run with workflowTemplateRef : # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : workflowRestrictions : | templateReferencing: Strict","title":"Workflow Restrictions"},{"location":"workflow-restrictions/#workflow-restrictions","text":"v2.9 and after","title":"Workflow Restrictions"},{"location":"workflow-restrictions/#introduction","text":"As the administrator of the controller, you may want to limit which types of Workflows your users can run. Workflow Restrictions allow you to set requirements for all Workflows.","title":"Introduction"},{"location":"workflow-restrictions/#available-restrictions","text":"templateReferencing: Strict : Only process Workflows using workflowTemplateRef . You can use this to require usage of WorkflowTemplates, disallowing arbitrary Workflow execution. templateReferencing: Secure : Same as Strict plus enforce that a referenced WorkflowTemplate hasn't changed between operations. If a running Workflow's underlying WorkflowTemplate changes, the Workflow will error out.","title":"Available Restrictions"},{"location":"workflow-restrictions/#setting-workflow-restrictions","text":"You can add workflowRestrictions in the workflow-controller-configmap . For example, to specify that Workflows may only run with workflowTemplateRef : # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : workflowRestrictions : | templateReferencing: Strict","title":"Setting Workflow Restrictions"},{"location":"workflow-submitting-workflow/","text":"One Workflow Submitting Another \u00b6 v2.8 and after If you want one workflow to create another, you can do this using curl . You'll need an access token . Typically the best way is to submit from a workflow template: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : demo- spec : entrypoint : main templates : - name : main steps : - - name : a template : create-wf - name : create-wf script : image : curlimages/curl:latest command : - sh source : > curl https://argo-server:2746/api/v1/workflows/argo/submit \\ -fs \\ -H \"Authorization: Bearer eyJhbGci...\" \\ -d '{\"resourceKind\": \"WorkflowTemplate\", \"resourceName\": \"wait\", \"submitOptions\": {\"labels\": \"workflows.argoproj.io/workflow-template=wait\"}}'","title":"One Workflow Submitting Another"},{"location":"workflow-submitting-workflow/#one-workflow-submitting-another","text":"v2.8 and after If you want one workflow to create another, you can do this using curl . You'll need an access token . 
Typically the best way is to submit from a workflow template: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : demo- spec : entrypoint : main templates : - name : main steps : - - name : a template : create-wf - name : create-wf script : image : curlimages/curl:latest command : - sh source : > curl https://argo-server:2746/api/v1/workflows/argo/submit \\ -fs \\ -H \"Authorization: Bearer eyJhbGci...\" \\ -d '{\"resourceKind\": \"WorkflowTemplate\", \"resourceName\": \"wait\", \"submitOptions\": {\"labels\": \"workflows.argoproj.io/workflow-template=wait\"}}'","title":"One Workflow Submitting Another"},{"location":"workflow-templates/","text":"Workflow Templates \u00b6 v2.4 and after Introduction \u00b6 WorkflowTemplates are definitions of Workflows that live in your cluster. This allows you to create a library of frequently-used templates and reuse them either by submitting them directly (v2.7 and after) or by referencing them from your Workflows . WorkflowTemplate vs template \u00b6 The terms WorkflowTemplate and template have created an unfortunate naming collision and have created some confusion in the past. However, a quick description should clarify each and their differences. A template (lower-case) is a task within a Workflow or (confusingly) a WorkflowTemplate under the field templates . Whenever you define a Workflow , you must define at least one (but usually more than one) template to run. This template can be of type container , script , dag , steps , resource , or suspend and can be referenced by an entrypoint or by other dag , and step templates. Here is an example of a Workflow with two templates : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : steps- spec : entrypoint : hello # We reference our first \"template\" here templates : - name : hello # The first \"template\" in this Workflow, it is referenced by \"entrypoint\" steps : # The type of this \"template\" is \"steps\" - - name : hello template : whalesay # We reference our second \"template\" here arguments : parameters : [{ name : message , value : \"hello1\" }] - name : whalesay # The second \"template\" in this Workflow, it is referenced by \"hello\" inputs : parameters : - name : message container : # The type of this \"template\" is \"container\" image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] A WorkflowTemplate is a definition of a Workflow that lives in your cluster. Since it is a definition of a Workflow it also contains templates . These templates can be referenced from within the WorkflowTemplate and from other Workflows and WorkflowTemplates on your cluster. To see how, please see Referencing Other WorkflowTemplates . WorkflowTemplate Spec \u00b6 v2.7 and after In v2.7 and after, all the fields in WorkflowSpec (except for priority that must be configured in a WorkflowSpec itself) are supported for WorkflowTemplates . You can take any existing Workflow you may have and convert it to a WorkflowTemplate by substituting kind: Workflow to kind: WorkflowTemplate . v2.4 \u2013 2.6 WorkflowTemplates in v2.4 - v2.6 are only partial Workflow definitions and only support the templates and arguments field. 
This would not be a valid WorkflowTemplate in v2.4 - v2.6 (notice entrypoint field): apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : entrypoint : whalesay-template # Fields other than \"arguments\" and \"templates\" not supported in v2.4 - v2.6 arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] However, this would be a valid WorkflowTemplate : apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] Adding labels/annotations to Workflows with workflowMetadata \u00b6 2.10.2 and after To automatically add labels and/or annotations to Workflows created from WorkflowTemplates , use workflowMetadata . apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : workflowMetadata : labels : example-label : example-value Working with parameters \u00b6 When working with parameters in a WorkflowTemplate , please note the following: When working with global parameters, you can instantiate your global variables in your Workflow and then directly reference them in your WorkflowTemplate . Below is a working example: apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : hello-world-template-global-arg spec : serviceAccountName : argo templates : - name : hello-world container : image : docker/whalesay command : [ cowsay ] args : [ \"{{workflow.parameters.global-parameter}}\" ] --- apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-wf-global-arg- spec : serviceAccountName : argo entrypoint : whalesay arguments : parameters : - name : global-parameter value : hello templates : - name : whalesay steps : - - name : hello-world templateRef : name : hello-world-template-global-arg template : hello-world When working with local parameters, the values of local parameters must be supplied at the template definition inside the WorkflowTemplate . Below is a working example: apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : hello-world-template-local-arg spec : templates : - name : hello-world inputs : parameters : - name : msg value : \"hello world\" container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.msg}}\" ] --- apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-local-arg- spec : entrypoint : whalesay templates : - name : whalesay steps : - - name : hello-world templateRef : name : hello-world-template-local-arg template : hello-world Referencing other WorkflowTemplates \u00b6 You can reference templates from another WorkflowTemplates (see the difference between the two ) using a templateRef field. Just as how you reference other templates within the same Workflow , you should do so from a steps or dag template. 
Here is an example from a steps template: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay steps : # You should only reference external \"templates\" in a \"steps\" or \"dag\" \"template\". - - name : call-whalesay-template templateRef : # You can reference a \"template\" from another \"WorkflowTemplate\" using this field name : workflow-template-1 # This is the name of the \"WorkflowTemplate\" CRD that contains the \"template\" you want template : whalesay-template # This is the name of the \"template\" you want to reference arguments : # You can pass in arguments as normal parameters : - name : message value : \"hello world\" You can also do so similarly with a dag template: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay dag : tasks : - name : call-whalesay-template templateRef : name : workflow-template-1 template : whalesay-template arguments : parameters : - name : message value : \"hello world\" You should never reference another template directly on a template object (outside of a steps or dag template). This includes both using template and templateRef . This behavior is deprecated, no longer supported, and will be removed in a future version. Here is an example of a deprecated reference that should not be used : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay template : # You should NEVER use \"template\" here. Use it under a \"steps\" or \"dag\" template (see above). templateRef : # You should NEVER use \"templateRef\" here. Use it under a \"steps\" or \"dag\" template (see above). name : workflow-template-1 template : whalesay-template arguments : # Arguments here are ignored. Use them under a \"steps\" or \"dag\" template (see above). parameters : - name : message value : \"hello world\" The reasoning for deprecating this behavior is that a template is a \"definition\": it defines inputs and things to be done once instantiated. With this deprecated behavior, the same template object is allowed to be an \"instantiator\": to pass in \"live\" arguments and reference other templates (those other templates may be \"definitions\" or \"instantiators\"). This behavior has been problematic and dangerous. It causes confusion and has design inconsistencies. 2.9 and after Create Workflow from WorkflowTemplate Spec \u00b6 You can create Workflow from WorkflowTemplate spec using workflowTemplateRef . If you pass the arguments to created Workflow , it will be merged with workflow template arguments. 
Here is an example for referring WorkflowTemplate as Workflow with passing entrypoint and Workflow Arguments to WorkflowTemplate apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay-template arguments : parameters : - name : message value : \"from workflow\" workflowTemplateRef : name : workflow-template-submittable Here is an example of a referring WorkflowTemplate as Workflow and using WorkflowTemplates 's entrypoint and Workflow Arguments apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : workflowTemplateRef : name : workflow-template-submittable Managing WorkflowTemplates \u00b6 CLI \u00b6 You can create some example templates as follows: argo template create https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/workflow-template/templates.yaml Then submit a workflow using one of those templates: argo submit https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/workflow-template/hello-world.yaml 2.7 and after Then submit a WorkflowTemplate as a Workflow : argo submit --from workflowtemplate/workflow-template-submittable If you need to submit a WorkflowTemplate as a Workflow with parameters: argo submit --from workflowtemplate/workflow-template-submittable -p message = value1 kubectl \u00b6 Using kubectl apply -f and kubectl get wftmpl GitOps via Argo CD \u00b6 WorkflowTemplate resources can be managed with GitOps by using Argo CD UI \u00b6 WorkflowTemplate resources can also be managed by the UI Users can specify options under enum to enable drop-down list selection when submitting WorkflowTemplate s from the UI. apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-with-enum-values spec : entrypoint : argosay arguments : parameters : - name : message value : one enum : - one - two - three templates : - name : argosay inputs : parameters : - name : message value : '{{workflow.parameters.message}}' container : name : main image : 'argoproj/argosay:v2' command : - /argosay args : - echo - '{{inputs.parameters.message}}'","title":"Workflow Templates"},{"location":"workflow-templates/#workflow-templates","text":"v2.4 and after","title":"Workflow Templates"},{"location":"workflow-templates/#introduction","text":"WorkflowTemplates are definitions of Workflows that live in your cluster. This allows you to create a library of frequently-used templates and reuse them either by submitting them directly (v2.7 and after) or by referencing them from your Workflows .","title":"Introduction"},{"location":"workflow-templates/#workflowtemplate-vs-template","text":"The terms WorkflowTemplate and template have created an unfortunate naming collision and have created some confusion in the past. However, a quick description should clarify each and their differences. A template (lower-case) is a task within a Workflow or (confusingly) a WorkflowTemplate under the field templates . Whenever you define a Workflow , you must define at least one (but usually more than one) template to run. This template can be of type container , script , dag , steps , resource , or suspend and can be referenced by an entrypoint or by other dag , and step templates. 
Here is an example of a Workflow with two templates : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : steps- spec : entrypoint : hello # We reference our first \"template\" here templates : - name : hello # The first \"template\" in this Workflow, it is referenced by \"entrypoint\" steps : # The type of this \"template\" is \"steps\" - - name : hello template : whalesay # We reference our second \"template\" here arguments : parameters : [{ name : message , value : \"hello1\" }] - name : whalesay # The second \"template\" in this Workflow, it is referenced by \"hello\" inputs : parameters : - name : message container : # The type of this \"template\" is \"container\" image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] A WorkflowTemplate is a definition of a Workflow that lives in your cluster. Since it is a definition of a Workflow it also contains templates . These templates can be referenced from within the WorkflowTemplate and from other Workflows and WorkflowTemplates on your cluster. To see how, please see Referencing Other WorkflowTemplates .","title":"WorkflowTemplate vs template"},{"location":"workflow-templates/#workflowtemplate-spec","text":"v2.7 and after In v2.7 and after, all the fields in WorkflowSpec (except for priority that must be configured in a WorkflowSpec itself) are supported for WorkflowTemplates . You can take any existing Workflow you may have and convert it to a WorkflowTemplate by substituting kind: Workflow to kind: WorkflowTemplate . v2.4 \u2013 2.6 WorkflowTemplates in v2.4 - v2.6 are only partial Workflow definitions and only support the templates and arguments field. This would not be a valid WorkflowTemplate in v2.4 - v2.6 (notice entrypoint field): apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : entrypoint : whalesay-template # Fields other than \"arguments\" and \"templates\" not supported in v2.4 - v2.6 arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] However, this would be a valid WorkflowTemplate : apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ]","title":"WorkflowTemplate Spec"},{"location":"workflow-templates/#adding-labelsannotations-to-workflows-with-workflowmetadata","text":"2.10.2 and after To automatically add labels and/or annotations to Workflows created from WorkflowTemplates , use workflowMetadata . apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : workflowMetadata : labels : example-label : example-value","title":"Adding labels/annotations to Workflows with workflowMetadata"},{"location":"workflow-templates/#working-with-parameters","text":"When working with parameters in a WorkflowTemplate , please note the following: When working with global parameters, you can instantiate your global variables in your Workflow and then directly reference them in your WorkflowTemplate . 
Below is a working example: apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : hello-world-template-global-arg spec : serviceAccountName : argo templates : - name : hello-world container : image : docker/whalesay command : [ cowsay ] args : [ \"{{workflow.parameters.global-parameter}}\" ] --- apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-wf-global-arg- spec : serviceAccountName : argo entrypoint : whalesay arguments : parameters : - name : global-parameter value : hello templates : - name : whalesay steps : - - name : hello-world templateRef : name : hello-world-template-global-arg template : hello-world When working with local parameters, the values of local parameters must be supplied at the template definition inside the WorkflowTemplate . Below is a working example: apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : hello-world-template-local-arg spec : templates : - name : hello-world inputs : parameters : - name : msg value : \"hello world\" container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.msg}}\" ] --- apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-local-arg- spec : entrypoint : whalesay templates : - name : whalesay steps : - - name : hello-world templateRef : name : hello-world-template-local-arg template : hello-world","title":"Working with parameters"},{"location":"workflow-templates/#referencing-other-workflowtemplates","text":"You can reference templates from another WorkflowTemplates (see the difference between the two ) using a templateRef field. Just as how you reference other templates within the same Workflow , you should do so from a steps or dag template. Here is an example from a steps template: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay steps : # You should only reference external \"templates\" in a \"steps\" or \"dag\" \"template\". - - name : call-whalesay-template templateRef : # You can reference a \"template\" from another \"WorkflowTemplate\" using this field name : workflow-template-1 # This is the name of the \"WorkflowTemplate\" CRD that contains the \"template\" you want template : whalesay-template # This is the name of the \"template\" you want to reference arguments : # You can pass in arguments as normal parameters : - name : message value : \"hello world\" You can also do so similarly with a dag template: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay dag : tasks : - name : call-whalesay-template templateRef : name : workflow-template-1 template : whalesay-template arguments : parameters : - name : message value : \"hello world\" You should never reference another template directly on a template object (outside of a steps or dag template). This includes both using template and templateRef . This behavior is deprecated, no longer supported, and will be removed in a future version. Here is an example of a deprecated reference that should not be used : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay template : # You should NEVER use \"template\" here. Use it under a \"steps\" or \"dag\" template (see above). 
templateRef : # You should NEVER use \"templateRef\" here. Use it under a \"steps\" or \"dag\" template (see above). name : workflow-template-1 template : whalesay-template arguments : # Arguments here are ignored. Use them under a \"steps\" or \"dag\" template (see above). parameters : - name : message value : \"hello world\" The reasoning for deprecating this behavior is that a template is a \"definition\": it defines inputs and things to be done once instantiated. With this deprecated behavior, the same template object is allowed to be an \"instantiator\": to pass in \"live\" arguments and reference other templates (those other templates may be \"definitions\" or \"instantiators\"). This behavior has been problematic and dangerous. It causes confusion and has design inconsistencies. 2.9 and after","title":"Referencing other WorkflowTemplates"},{"location":"workflow-templates/#create-workflow-from-workflowtemplate-spec","text":"You can create Workflow from WorkflowTemplate spec using workflowTemplateRef . If you pass the arguments to created Workflow , it will be merged with workflow template arguments. Here is an example for referring WorkflowTemplate as Workflow with passing entrypoint and Workflow Arguments to WorkflowTemplate apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay-template arguments : parameters : - name : message value : \"from workflow\" workflowTemplateRef : name : workflow-template-submittable Here is an example of a referring WorkflowTemplate as Workflow and using WorkflowTemplates 's entrypoint and Workflow Arguments apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : workflowTemplateRef : name : workflow-template-submittable","title":"Create Workflow from WorkflowTemplate Spec"},{"location":"workflow-templates/#managing-workflowtemplates","text":"","title":"Managing WorkflowTemplates"},{"location":"workflow-templates/#cli","text":"You can create some example templates as follows: argo template create https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/workflow-template/templates.yaml Then submit a workflow using one of those templates: argo submit https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/workflow-template/hello-world.yaml 2.7 and after Then submit a WorkflowTemplate as a Workflow : argo submit --from workflowtemplate/workflow-template-submittable If you need to submit a WorkflowTemplate as a Workflow with parameters: argo submit --from workflowtemplate/workflow-template-submittable -p message = value1","title":"CLI"},{"location":"workflow-templates/#kubectl","text":"Using kubectl apply -f and kubectl get wftmpl","title":"kubectl"},{"location":"workflow-templates/#gitops-via-argo-cd","text":"WorkflowTemplate resources can be managed with GitOps by using Argo CD","title":"GitOps via Argo CD"},{"location":"workflow-templates/#ui","text":"WorkflowTemplate resources can also be managed by the UI Users can specify options under enum to enable drop-down list selection when submitting WorkflowTemplate s from the UI. 
apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-with-enum-values spec : entrypoint : argosay arguments : parameters : - name : message value : one enum : - one - two - three templates : - name : argosay inputs : parameters : - name : message value : '{{workflow.parameters.message}}' container : name : main image : 'argoproj/argosay:v2' command : - /argosay args : - echo - '{{inputs.parameters.message}}'","title":"UI"},{"location":"cli/argo/","text":"argo \u00b6 argo is the command line interface to Argo Synopsis \u00b6 You can use the CLI in the following modes: Kubernetes API Mode (default) \u00b6 Requests are sent directly to the Kubernetes API. No Argo Server is needed. Large workflows and the workflow archive are not supported. Use when you have direct access to the Kubernetes API, and don't need large workflow or workflow archive support. If you're using instance ID (which is very unlikely), you'll need to set it: ARGO_INSTANCEID=your-instanceid Argo Server GRPC Mode \u00b6 Requests are sent to the Argo Server API via GRPC (using HTTP/2). Large workflows and the workflow archive are supported. Network load-balancers that do not support HTTP/2 are not supported. Use if you do not have access to the Kubernetes API (e.g. you're in another cluster), and you're running the Argo Server using a network load-balancer that support HTTP/2. To enable, set ARGO_SERVER: ARGO_SERVER = localhost : 2746 ; # The format is \"host:port\" - do not prefix with \"http\" or \"https\" If you're have transport-layer security (TLS) enabled (i.e. you are running \"argo server --secure\" and therefore has HTTPS): ARGO_SECURE=true If your server is running with self-signed certificates. Do not use in production: ARGO_INSECURE_SKIP_VERIFY=true By default, the CLI uses your KUBECONFIG to determine default for ARGO_TOKEN and ARGO_NAMESPACE. You probably error with \"no configuration has been provided\". To prevent it: KUBECONFIG=/dev/null You will then need to set: ARGO_NAMESPACE=argo And: ARGO_TOKEN='Bearer ******' ;# Should always start with \"Bearer \" or \"Basic \". Argo Server HTTP1 Mode \u00b6 As per GRPC mode, but uses HTTP. Can be used with ALB that does not support HTTP/2. The command \"argo logs --since-time=2020....\" will not work (due to time-type). Use this when your network load-balancer does not support HTTP/2. Use the same configuration as GRPC mode, but also set: ARGO_HTTP1=true If your server is behind an ingress with a path (you'll be running \"argo server --basehref /...) or \"BASE_HREF=/... argo server\"): ARGO_BASE_HREF=/argo argo [flags] Options \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. -h, --help help for argo --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive argo auth - manage authentication settings argo cluster-template - manipulate cluster workflow templates argo completion - output shell completion code for the specified shell (bash or zsh) argo cp - copy artifacts from workflow argo cron - manage cron workflows argo delete - delete workflows argo executor-plugin - manage executor plugins argo get - display details about a workflow argo lint - validate files or directories of manifests argo list - list workflows argo logs - view logs of a pod or workflow argo node - perform action on a node in a workflow argo resubmit - resubmit one or more workflows argo resume - resume zero or more workflows (opposite of suspend) argo retry - retry zero or more workflows argo server - start the Argo Server argo stop - stop zero or more workflows allowing all exit handlers to run argo submit - submit a workflow argo suspend - suspend zero or more workflows (opposite of resume) argo template - manipulate workflow templates argo terminate - terminate zero or more workflows immediately argo version - print version information argo wait - waits for workflows to complete argo watch - watch a workflow until it completes","title":"argo"},{"location":"cli/argo/#argo","text":"argo is the command line interface to Argo","title":"argo"},{"location":"cli/argo/#synopsis","text":"You can use the CLI in the following modes:","title":"Synopsis"},{"location":"cli/argo/#kubernetes-api-mode-default","text":"Requests are sent directly to the Kubernetes API. No Argo Server is needed. Large workflows and the workflow archive are not supported. Use when you have direct access to the Kubernetes API, and don't need large workflow or workflow archive support. If you're using instance ID (which is very unlikely), you'll need to set it: ARGO_INSTANCEID=your-instanceid","title":"Kubernetes API Mode (default)"},{"location":"cli/argo/#argo-server-grpc-mode","text":"Requests are sent to the Argo Server API via GRPC (using HTTP/2). Large workflows and the workflow archive are supported. Network load-balancers that do not support HTTP/2 are not supported. Use if you do not have access to the Kubernetes API (e.g. you're in another cluster), and you're running the Argo Server using a network load-balancer that support HTTP/2. To enable, set ARGO_SERVER: ARGO_SERVER = localhost : 2746 ; # The format is \"host:port\" - do not prefix with \"http\" or \"https\" If you're have transport-layer security (TLS) enabled (i.e. you are running \"argo server --secure\" and therefore has HTTPS): ARGO_SECURE=true If your server is running with self-signed certificates. Do not use in production: ARGO_INSECURE_SKIP_VERIFY=true By default, the CLI uses your KUBECONFIG to determine default for ARGO_TOKEN and ARGO_NAMESPACE. You probably error with \"no configuration has been provided\". To prevent it: KUBECONFIG=/dev/null You will then need to set: ARGO_NAMESPACE=argo And: ARGO_TOKEN='Bearer ******' ;# Should always start with \"Bearer \" or \"Basic \".","title":"Argo Server GRPC Mode"},{"location":"cli/argo/#argo-server-http1-mode","text":"As per GRPC mode, but uses HTTP. Can be used with ALB that does not support HTTP/2. The command \"argo logs --since-time=2020....\" will not work (due to time-type). Use this when your network load-balancer does not support HTTP/2. Use the same configuration as GRPC mode, but also set: ARGO_HTTP1=true If your server is behind an ingress with a path (you'll be running \"argo server --basehref /...) 
or \"BASE_HREF=/... argo server\"): ARGO_BASE_HREF=/argo argo [flags]","title":"Argo Server HTTP1 Mode"},{"location":"cli/argo/#options","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. -h, --help help for argo --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug","title":"Options"},{"location":"cli/argo/#see-also","text":"argo archive - manage the workflow archive argo auth - manage authentication settings argo cluster-template - manipulate cluster workflow templates argo completion - output shell completion code for the specified shell (bash or zsh) argo cp - copy artifacts from workflow argo cron - manage cron workflows argo delete - delete workflows argo executor-plugin - manage executor plugins argo get - display details about a workflow argo lint - validate files or directories of manifests argo list - list workflows argo logs - view logs of a pod or workflow argo node - perform action on a node in a workflow argo resubmit - resubmit one or more workflows argo resume - resume zero or more workflows (opposite of suspend) argo retry - retry zero or more workflows argo server - start the Argo Server argo stop - stop zero or more workflows allowing all exit handlers to run argo submit - submit a workflow argo suspend - suspend zero or more workflows (opposite of resume) argo template - manipulate workflow templates argo terminate - terminate zero or more workflows immediately argo version - print version information argo wait - waits for workflows to complete argo watch - watch a workflow until it completes","title":"SEE ALSO"},{"location":"cli/argo_archive/","text":"argo archive \u00b6 manage the workflow archive argo archive [flags] Options \u00b6 -h, --help help for archive Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo argo archive delete - delete a workflow in the archive argo archive get - get a workflow in the archive argo archive list - list workflows in the archive argo archive list-label-keys - list workflows label keys in the archive argo archive list-label-values - get workflow label values in the archive argo archive resubmit - resubmit one or more workflows argo archive retry - retry zero or more workflows","title":"argo archive"},{"location":"cli/argo_archive/#argo-archive","text":"manage the workflow archive argo archive [flags]","title":"argo archive"},{"location":"cli/argo_archive/#options","text":"-h, --help help for archive","title":"Options"},{"location":"cli/argo_archive/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. 
Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive/#see-also","text":"argo - argo is the command line interface to Argo argo archive delete - delete a workflow in the archive argo archive get - get a workflow in the archive argo archive list - list workflows in the archive argo archive list-label-keys - list workflows label keys in the archive argo archive list-label-values - get workflow label values in the archive argo archive resubmit - resubmit one or more workflows argo archive retry - retry zero or more workflows","title":"SEE ALSO"},{"location":"cli/argo_archive_delete/","text":"argo archive delete \u00b6 delete a workflow in the archive argo archive delete UID... [flags] Options \u00b6 -h, --help help for delete Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive","title":"argo archive delete"},{"location":"cli/argo_archive_delete/#argo-archive-delete","text":"delete a workflow in the archive argo archive delete UID... [flags]","title":"argo archive delete"},{"location":"cli/argo_archive_delete/#options","text":"-h, --help help for delete","title":"Options"},{"location":"cli/argo_archive_delete/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. 
Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive_delete/#see-also","text":"argo archive - manage the workflow archive","title":"SEE ALSO"},{"location":"cli/argo_archive_get/","text":"argo archive get \u00b6 get a workflow in the archive argo archive get UID [flags] Options \u00b6 -h, --help help for get -o, --output string Output format. One of: json|yaml|wide (default \"wide\") Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive","title":"argo archive get"},{"location":"cli/argo_archive_get/#argo-archive-get","text":"get a workflow in the archive argo archive get UID [flags]","title":"argo archive get"},{"location":"cli/argo_archive_get/#options","text":"-h, --help help for get -o, --output string Output format. One of: json|yaml|wide (default \"wide\")","title":"Options"},{"location":"cli/argo_archive_get/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive_get/#see-also","text":"argo archive - manage the workflow archive","title":"SEE ALSO"},{"location":"cli/argo_archive_list-label-keys/","text":"argo archive list-label-keys \u00b6 list workflows label keys in the archive argo archive list-label-keys [flags] Options \u00b6 -h, --help help for list-label-keys Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive","title":"argo archive list-label-keys"},{"location":"cli/argo_archive_list-label-keys/#argo-archive-list-label-keys","text":"list workflows label keys in the archive argo archive list-label-keys [flags]","title":"argo archive list-label-keys"},{"location":"cli/argo_archive_list-label-keys/#options","text":"-h, --help help for list-label-keys","title":"Options"},{"location":"cli/argo_archive_list-label-keys/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive_list-label-keys/#see-also","text":"argo archive - manage the workflow archive","title":"SEE ALSO"},{"location":"cli/argo_archive_list-label-values/","text":"argo archive list-label-values \u00b6 get workflow label values in the archive argo archive list-label-values [flags] Options \u00b6 -h, --help help for list-label-values -l, --selector string Selector (label query) to query on, allows 1 value (e.g. -l key1) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive","title":"argo archive list-label-values"},{"location":"cli/argo_archive_list-label-values/#argo-archive-list-label-values","text":"get workflow label values in the archive argo archive list-label-values [flags]","title":"argo archive list-label-values"},{"location":"cli/argo_archive_list-label-values/#options","text":"-h, --help help for list-label-values -l, --selector string Selector (label query) to query on, allows 1 value (e.g. -l key1)","title":"Options"},{"location":"cli/argo_archive_list-label-values/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive_list-label-values/#see-also","text":"argo archive - manage the workflow archive","title":"SEE ALSO"},{"location":"cli/argo_archive_list/","text":"argo archive list \u00b6 list workflows in the archive argo archive list [flags] Options \u00b6 --chunk-size int Return large lists in chunks rather than all at once. Pass 0 to disable. -h, --help help for list -o, --output string Output format. One of: json|yaml|wide (default \"wide\") -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. 
Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive","title":"argo archive list"},{"location":"cli/argo_archive_list/#argo-archive-list","text":"list workflows in the archive argo archive list [flags]","title":"argo archive list"},{"location":"cli/argo_archive_list/#options","text":"--chunk-size int Return large lists in chunks rather than all at once. Pass 0 to disable. -h, --help help for list -o, --output string Output format. One of: json|yaml|wide (default \"wide\") -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)","title":"Options"},{"location":"cli/argo_archive_list/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. 
Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive_list/#see-also","text":"argo archive - manage the workflow archive","title":"SEE ALSO"},{"location":"cli/argo_archive_resubmit/","text":"argo archive resubmit \u00b6 resubmit one or more workflows argo archive resubmit [WORKFLOW...] [flags] Examples \u00b6 # Resubmit a workflow: argo archive resubmit uid # Resubmit multiple workflows: argo archive resubmit uid another-uid # Resubmit multiple workflows by label selector: argo archive resubmit -l workflows.argoproj.io/test=true # Resubmit multiple workflows by field selector: argo archive resubmit --field-selector metadata.namespace=argo # Resubmit and wait for completion: argo archive resubmit --wait uid # Resubmit and watch until completion: argo archive resubmit --watch uid # Resubmit and tail logs until completion: argo archive resubmit --log uid Options \u00b6 --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for resubmit --log log the workflow until it completes --memoized re-use successful steps & outputs from the previous run -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --priority int32 workflow priority -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is resubmitted --watch watch the workflow until it completes, only works when a single workflow is resubmitted Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. 
--as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive","title":"argo archive resubmit"},{"location":"cli/argo_archive_resubmit/#argo-archive-resubmit","text":"resubmit one or more workflows argo archive resubmit [WORKFLOW...] 
[flags]","title":"argo archive resubmit"},{"location":"cli/argo_archive_resubmit/#examples","text":"# Resubmit a workflow: argo archive resubmit uid # Resubmit multiple workflows: argo archive resubmit uid another-uid # Resubmit multiple workflows by label selector: argo archive resubmit -l workflows.argoproj.io/test=true # Resubmit multiple workflows by field selector: argo archive resubmit --field-selector metadata.namespace=argo # Resubmit and wait for completion: argo archive resubmit --wait uid # Resubmit and watch until completion: argo archive resubmit --watch uid # Resubmit and tail logs until completion: argo archive resubmit --log uid","title":"Examples"},{"location":"cli/argo_archive_resubmit/#options","text":"--field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for resubmit --log log the workflow until it completes --memoized re-use successful steps & outputs from the previous run -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --priority int32 workflow priority -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is resubmitted --watch watch the workflow until it completes, only works when a single workflow is resubmitted","title":"Options"},{"location":"cli/argo_archive_resubmit/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive_resubmit/#see-also","text":"argo archive - manage the workflow archive","title":"SEE ALSO"},{"location":"cli/argo_archive_retry/","text":"argo archive retry \u00b6 retry zero or more workflows argo archive retry [WORKFLOW...] [flags] Examples \u00b6 # Retry a workflow: argo archive retry uid # Retry multiple workflows: argo archive retry uid another-uid # Retry multiple workflows by label selector: argo archive retry -l workflows.argoproj.io/test=true # Retry multiple workflows by field selector: argo archive retry --field-selector metadata.namespace=argo # Retry and wait for completion: argo archive retry --wait uid # Retry and watch until completion: argo archive retry --watch uid # Retry and tail logs until completion: argo archive retry --log uid Options \u00b6 --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for retry --log log the workflow until it completes --node-field-selector string selector of nodes to reset, eg: --node-field-selector inputs.paramaters.myparam.value=abc -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --restart-successful indicates to restart successful nodes matching the --node-field-selector -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is retried --watch watch the workflow until it completes, only works when a single workflow is retried Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive","title":"argo archive retry"},{"location":"cli/argo_archive_retry/#argo-archive-retry","text":"retry zero or more workflows argo archive retry [WORKFLOW...] [flags]","title":"argo archive retry"},{"location":"cli/argo_archive_retry/#examples","text":"# Retry a workflow: argo archive retry uid # Retry multiple workflows: argo archive retry uid another-uid # Retry multiple workflows by label selector: argo archive retry -l workflows.argoproj.io/test=true # Retry multiple workflows by field selector: argo archive retry --field-selector metadata.namespace=argo # Retry and wait for completion: argo archive retry --wait uid # Retry and watch until completion: argo archive retry --watch uid # Retry and tail logs until completion: argo archive retry --log uid","title":"Examples"},{"location":"cli/argo_archive_retry/#options","text":"--field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. 
-h, --help help for retry --log log the workflow until it completes --node-field-selector string selector of nodes to reset, eg: --node-field-selector inputs.paramaters.myparam.value=abc -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --restart-successful indicates to restart successful nodes matching the --node-field-selector -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is retried --watch watch the workflow until it completes, only works when a single workflow is retried","title":"Options"},{"location":"cli/argo_archive_retry/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive_retry/#see-also","text":"argo archive - manage the workflow archive","title":"SEE ALSO"},{"location":"cli/argo_auth/","text":"argo auth \u00b6 manage authentication settings argo auth [flags] Options \u00b6 -h, --help help for auth Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo argo auth token - Print the auth token","title":"argo auth"},{"location":"cli/argo_auth/#argo-auth","text":"manage authentication settings argo auth [flags]","title":"argo auth"},{"location":"cli/argo_auth/#options","text":"-h, --help help for auth","title":"Options"},{"location":"cli/argo_auth/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_auth/#see-also","text":"argo - argo is the command line interface to Argo argo auth token - Print the auth token","title":"SEE ALSO"},{"location":"cli/argo_auth_token/","text":"argo auth token \u00b6 Print the auth token argo auth token [flags] Options \u00b6 -h, --help help for token Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo auth - manage authentication settings","title":"argo auth token"},{"location":"cli/argo_auth_token/#argo-auth-token","text":"Print the auth token argo auth token [flags]","title":"argo auth token"},{"location":"cli/argo_auth_token/#options","text":"-h, --help help for token","title":"Options"},{"location":"cli/argo_auth_token/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_auth_token/#see-also","text":"argo auth - manage authentication settings","title":"SEE ALSO"},{"location":"cli/argo_cluster-template/","text":"argo cluster-template \u00b6 manipulate cluster workflow templates argo cluster-template [flags] Options \u00b6 -h, --help help for cluster-template Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo argo cluster-template create - create a cluster workflow template argo cluster-template delete - delete a cluster workflow template argo cluster-template get - display details about a cluster workflow template argo cluster-template lint - validate files or directories of cluster workflow template manifests argo cluster-template list - list cluster workflow templates","title":"argo cluster-template"},{"location":"cli/argo_cluster-template/#argo-cluster-template","text":"manipulate cluster workflow templates argo cluster-template [flags]","title":"argo cluster-template"},{"location":"cli/argo_cluster-template/#options","text":"-h, --help help for cluster-template","title":"Options"},{"location":"cli/argo_cluster-template/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cluster-template/#see-also","text":"argo - argo is the command line interface to Argo argo cluster-template create - create a cluster workflow template argo cluster-template delete - delete a cluster workflow template argo cluster-template get - display details about a cluster workflow template argo cluster-template lint - validate files or directories of cluster workflow template manifests argo cluster-template list - list cluster workflow templates","title":"SEE ALSO"},{"location":"cli/argo_cluster-template_create/","text":"argo cluster-template create \u00b6 create a cluster workflow template argo cluster-template create FILE1 FILE2... [flags] Examples \u00b6 # Create a Cluster Workflow Template: argo cluster-template create FILE1 # Create a Cluster Workflow Template and print it as YAML: argo cluster-template create FILE1 --output yaml # Create a Cluster Workflow Template with relaxed validation: argo cluster-template create FILE1 --strict false Options \u00b6 -h, --help help for create -o, --output string Output format. One of: name|json|yaml|wide --strict perform strict workflow validation (default true) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cluster-template - manipulate cluster workflow templates","title":"argo cluster-template create"},{"location":"cli/argo_cluster-template_create/#argo-cluster-template-create","text":"create a cluster workflow template argo cluster-template create FILE1 FILE2... [flags]","title":"argo cluster-template create"},{"location":"cli/argo_cluster-template_create/#examples","text":"# Create a Cluster Workflow Template: argo cluster-template create FILE1 # Create a Cluster Workflow Template and print it as YAML: argo cluster-template create FILE1 --output yaml # Create a Cluster Workflow Template with relaxed validation: argo cluster-template create FILE1 --strict false","title":"Examples"},{"location":"cli/argo_cluster-template_create/#options","text":"-h, --help help for create -o, --output string Output format. One of: name|json|yaml|wide --strict perform strict workflow validation (default true)","title":"Options"},{"location":"cli/argo_cluster-template_create/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cluster-template_create/#see-also","text":"argo cluster-template - manipulate cluster workflow templates","title":"SEE ALSO"},{"location":"cli/argo_cluster-template_delete/","text":"argo cluster-template delete \u00b6 delete a cluster workflow template argo cluster-template delete WORKFLOW_TEMPLATE [flags] Options \u00b6 --all Delete all cluster workflow templates -h, --help help for delete Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cluster-template - manipulate cluster workflow templates","title":"argo cluster-template delete"},{"location":"cli/argo_cluster-template_delete/#argo-cluster-template-delete","text":"delete a cluster workflow template argo cluster-template delete WORKFLOW_TEMPLATE [flags]","title":"argo cluster-template delete"},{"location":"cli/argo_cluster-template_delete/#options","text":"--all Delete all cluster workflow templates -h, --help help for delete","title":"Options"},{"location":"cli/argo_cluster-template_delete/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cluster-template_delete/#see-also","text":"argo cluster-template - manipulate cluster workflow templates","title":"SEE ALSO"},{"location":"cli/argo_cluster-template_get/","text":"argo cluster-template get \u00b6 display details about a cluster workflow template argo cluster-template get CLUSTER WORKFLOW_TEMPLATE... [flags] Options \u00b6 -h, --help help for get -o, --output string Output format. One of: json|yaml|wide Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cluster-template - manipulate cluster workflow templates","title":"argo cluster-template get"},{"location":"cli/argo_cluster-template_get/#argo-cluster-template-get","text":"display details about a cluster workflow template argo cluster-template get CLUSTER WORKFLOW_TEMPLATE... [flags]","title":"argo cluster-template get"},{"location":"cli/argo_cluster-template_get/#options","text":"-h, --help help for get -o, --output string Output format. One of: json|yaml|wide","title":"Options"},{"location":"cli/argo_cluster-template_get/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cluster-template_get/#see-also","text":"argo cluster-template - manipulate cluster workflow templates","title":"SEE ALSO"},{"location":"cli/argo_cluster-template_lint/","text":"argo cluster-template lint \u00b6 validate files or directories of cluster workflow template manifests argo cluster-template lint FILE... [flags] Options \u00b6 -h, --help help for lint -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict perform strict workflow validation (default true) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cluster-template - manipulate cluster workflow templates","title":"argo cluster-template lint"},{"location":"cli/argo_cluster-template_lint/#argo-cluster-template-lint","text":"validate files or directories of cluster workflow template manifests argo cluster-template lint FILE... [flags]","title":"argo cluster-template lint"},{"location":"cli/argo_cluster-template_lint/#options","text":"-h, --help help for lint -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict perform strict workflow validation (default true)","title":"Options"},{"location":"cli/argo_cluster-template_lint/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. 
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cluster-template_lint/#see-also","text":"argo cluster-template - manipulate cluster workflow templates","title":"SEE ALSO"},{"location":"cli/argo_cluster-template_list/","text":"argo cluster-template list \u00b6 list cluster workflow templates argo cluster-template list [flags] Examples \u00b6 # List Cluster Workflow Templates: argo cluster-template list # List Cluster Workflow Templates with additional details such as labels, annotations, and status: argo cluster-template list --output wide # List Cluster Workflow Templates by name only: argo cluster-template list -o name Options \u00b6 -h, --help help for list -o, --output string Output format. One of: wide|name Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. 
(Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cluster-template - manipulate cluster workflow templates","title":"argo cluster-template list"},{"location":"cli/argo_cluster-template_list/#argo-cluster-template-list","text":"list cluster workflow templates argo cluster-template list [flags]","title":"argo cluster-template list"},{"location":"cli/argo_cluster-template_list/#examples","text":"# List Cluster Workflow Templates: argo cluster-template list # List Cluster Workflow Templates with additional details such as labels, annotations, and status: argo cluster-template list --output wide # List Cluster Workflow Templates by name only: argo cluster-template list -o name","title":"Examples"},{"location":"cli/argo_cluster-template_list/#options","text":"-h, --help help for list -o, --output string Output format. One of: wide|name","title":"Options"},{"location":"cli/argo_cluster-template_list/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cluster-template_list/#see-also","text":"argo cluster-template - manipulate cluster workflow templates","title":"SEE ALSO"},{"location":"cli/argo_completion/","text":"argo completion \u00b6 output shell completion code for the specified shell (bash or zsh) Synopsis \u00b6 Write bash or zsh shell completion code to standard output. For bash, ensure you have bash completions installed and enabled. To access completions in your current shell, run $ source <(argo completion bash) Alternatively, write it to a file and source in .bash_profile For zsh, output to a file in a directory referenced by the $fpath shell variable. argo completion SHELL [flags] Options \u00b6 -h, --help help for completion Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. 
--as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo completion"},{"location":"cli/argo_completion/#argo-completion","text":"output shell completion code for the specified shell (bash or zsh)","title":"argo completion"},{"location":"cli/argo_completion/#synopsis","text":"Write bash or zsh shell completion code to standard output. For bash, ensure you have bash completions installed and enabled. To access completions in your current shell, run $ source <(argo completion bash) Alternatively, write it to a file and source in .bash_profile For zsh, output to a file in a directory referenced by the $fpath shell variable. argo completion SHELL [flags]","title":"Synopsis"},{"location":"cli/argo_completion/#options","text":"-h, --help help for completion","title":"Options"},{"location":"cli/argo_completion/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). 
Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_completion/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_cp/","text":"argo cp \u00b6 copy artifacts from workflow argo cp my-wf output-directory ... 
[flags] Examples \u00b6 # Copy a workflow's artifacts to a local output directory: argo cp my-wf output-directory # Copy artifacts from a specific node in a workflow to a local output directory: argo cp my-wf output-directory --node-id=my-wf-node-id-123 Options \u00b6 --artifact-name string name of output artifact in workflow -h, --help help for cp -n, --namespace string namespace of workflow --node-id string id of node in workflow --path string use variables {workflowName}, {nodeId}, {templateName}, {artifactName}, and {namespace} to create a customized path to store the artifacts; example: {workflowName}/{templateName}/{artifactName} (default \"{namespace}/{workflowName}/{nodeId}/outputs/{artifactName}\") --template-name string name of template in workflow Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo cp"},{"location":"cli/argo_cp/#argo-cp","text":"copy artifacts from workflow argo cp my-wf output-directory ... [flags]","title":"argo cp"},{"location":"cli/argo_cp/#examples","text":"# Copy a workflow's artifacts to a local output directory: argo cp my-wf output-directory # Copy artifacts from a specific node in a workflow to a local output directory: argo cp my-wf output-directory --node-id=my-wf-node-id-123","title":"Examples"},{"location":"cli/argo_cp/#options","text":"--artifact-name string name of output artifact in workflow -h, --help help for cp -n, --namespace string namespace of workflow --node-id string id of node in workflow --path string use variables {workflowName}, {nodeId}, {templateName}, {artifactName}, and {namespace} to create a customized path to store the artifacts; example: {workflowName}/{templateName}/{artifactName} (default \"{namespace}/{workflowName}/{nodeId}/outputs/{artifactName}\") --template-name string name of template in workflow","title":"Options"},{"location":"cli/argo_cp/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cp/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_cron/","text":"argo cron \u00b6 manage cron workflows Synopsis \u00b6 NextScheduledRun assumes that the workflow-controller uses UTC as its timezone argo cron [flags] Options \u00b6 -h, --help help for cron Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. 
(default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo argo cron create - create a cron workflow argo cron delete - delete a cron workflow argo cron get - display details about a cron workflow argo cron lint - validate files or directories of cron workflow manifests argo cron list - list cron workflows argo cron resume - resume zero or more cron workflows argo cron suspend - suspend zero or more cron workflows","title":"argo cron"},{"location":"cli/argo_cron/#argo-cron","text":"manage cron workflows","title":"argo cron"},{"location":"cli/argo_cron/#synopsis","text":"NextScheduledRun assumes that the workflow-controller uses UTC as its timezone argo cron [flags]","title":"Synopsis"},{"location":"cli/argo_cron/#options","text":"-h, --help help for cron","title":"Options"},{"location":"cli/argo_cron/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. 
Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron/#see-also","text":"argo - argo is the command line interface to Argo argo cron create - create a cron workflow argo cron delete - delete a cron workflow argo cron get - display details about a cron workflow argo cron lint - validate files or directories of cron workflow manifests argo cron list - list cron workflows argo cron resume - resume zero or more cron workflows argo cron suspend - suspend zero or more cron workflows","title":"SEE ALSO"},{"location":"cli/argo_cron_create/","text":"argo cron create \u00b6 create a cron workflow argo cron create FILE1 FILE2... [flags] Options \u00b6 --entrypoint string override entrypoint --generate-name string override metadata.generateName -h, --help help for create -l, --labels string Comma separated labels to apply to the workflow. Will override previous values. --name string override metadata.name -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray pass an input parameter -f, --parameter-file string pass a file containing all input parameters --schedule string override cron workflow schedule --serviceaccount string run all pods in the workflow using specified serviceaccount --strict perform strict workflow validation (default true) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cron - manage cron workflows","title":"argo cron create"},{"location":"cli/argo_cron_create/#argo-cron-create","text":"create a cron workflow argo cron create FILE1 FILE2... [flags]","title":"argo cron create"},{"location":"cli/argo_cron_create/#options","text":"--entrypoint string override entrypoint --generate-name string override metadata.generateName -h, --help help for create -l, --labels string Comma separated labels to apply to the workflow. Will override previous values. --name string override metadata.name -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray pass an input parameter -f, --parameter-file string pass a file containing all input parameters --schedule string override cron workflow schedule --serviceaccount string run all pods in the workflow using specified serviceaccount --strict perform strict workflow validation (default true)","title":"Options"},{"location":"cli/argo_cron_create/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. 
(Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron_create/#see-also","text":"argo cron - manage cron workflows","title":"SEE ALSO"},{"location":"cli/argo_cron_delete/","text":"argo cron delete \u00b6 delete a cron workflow argo cron delete [CRON_WORKFLOW... | --all] [flags] Options \u00b6 --all Delete all cron workflows -h, --help help for delete Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. 
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cron - manage cron workflows","title":"argo cron delete"},{"location":"cli/argo_cron_delete/#argo-cron-delete","text":"delete a cron workflow argo cron delete [CRON_WORKFLOW... | --all] [flags]","title":"argo cron delete"},{"location":"cli/argo_cron_delete/#options","text":"--all Delete all cron workflows -h, --help help for delete","title":"Options"},{"location":"cli/argo_cron_delete/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
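Illustrative usage for argo cron delete (my-cron-wf is a placeholder name): # Delete a single cron workflow: argo cron delete my-cron-wf # Delete all cron workflows in the current namespace: argo cron delete --all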
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron_delete/#see-also","text":"argo cron - manage cron workflows","title":"SEE ALSO"},{"location":"cli/argo_cron_get/","text":"argo cron get \u00b6 display details about a cron workflow argo cron get CRON_WORKFLOW... [flags] Options \u00b6 -h, --help help for get -o, --output string Output format. One of: json|yaml|wide Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. 
Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cron - manage cron workflows","title":"argo cron get"},{"location":"cli/argo_cron_get/#argo-cron-get","text":"display details about a cron workflow argo cron get CRON_WORKFLOW... [flags]","title":"argo cron get"},{"location":"cli/argo_cron_get/#options","text":"-h, --help help for get -o, --output string Output format. One of: json|yaml|wide","title":"Options"},{"location":"cli/argo_cron_get/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. 
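Illustrative usage for argo cron get (my-cron-wf is a placeholder name): # Show details of a cron workflow as YAML: argo cron get my-cron-wf -o yaml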
--kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron_get/#see-also","text":"argo cron - manage cron workflows","title":"SEE ALSO"},{"location":"cli/argo_cron_lint/","text":"argo cron lint \u00b6 validate files or directories of cron workflow manifests argo cron lint FILE... [flags] Options \u00b6 -h, --help help for lint -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict perform strict validation (default true) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cron - manage cron workflows","title":"argo cron lint"},{"location":"cli/argo_cron_lint/#argo-cron-lint","text":"validate files or directories of cron workflow manifests argo cron lint FILE... [flags]","title":"argo cron lint"},{"location":"cli/argo_cron_lint/#options","text":"-h, --help help for lint -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict perform strict validation (default true)","title":"Options"},{"location":"cli/argo_cron_lint/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
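Illustrative usage for argo cron lint (the file name is a placeholder): # Validate a cron workflow manifest with simple output: argo cron lint my-cron-wf.yaml -o simple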
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron_lint/#see-also","text":"argo cron - manage cron workflows","title":"SEE ALSO"},{"location":"cli/argo_cron_list/","text":"argo cron list \u00b6 list cron workflows argo cron list [flags] Options \u00b6 -A, --all-namespaces Show workflows from all namespaces -h, --help help for list -o, --output string Output format. One of: wide|name -l, --selector string Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cron - manage cron workflows","title":"argo cron list"},{"location":"cli/argo_cron_list/#argo-cron-list","text":"list cron workflows argo cron list [flags]","title":"argo cron list"},{"location":"cli/argo_cron_list/#options","text":"-A, --all-namespaces Show workflows from all namespaces -h, --help help for list -o, --output string Output format. One of: wide|name -l, --selector string Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.","title":"Options"},{"location":"cli/argo_cron_list/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. 
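Illustrative usage for argo cron list (the label selector value is a placeholder): # List cron workflows across all namespaces: argo cron list -A # List cron workflows matching a label selector: argo cron list -l team=data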
Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron_list/#see-also","text":"argo cron - manage cron workflows","title":"SEE ALSO"},{"location":"cli/argo_cron_resume/","text":"argo cron resume \u00b6 resume zero or more cron workflows argo cron resume [CRON_WORKFLOW...] [flags] Options \u00b6 -h, --help help for resume Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cron - manage cron workflows","title":"argo cron resume"},{"location":"cli/argo_cron_resume/#argo-cron-resume","text":"resume zero or more cron workflows argo cron resume [CRON_WORKFLOW...] [flags]","title":"argo cron resume"},{"location":"cli/argo_cron_resume/#options","text":"-h, --help help for resume","title":"Options"},{"location":"cli/argo_cron_resume/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
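Illustrative usage for argo cron resume (my-cron-wf is a placeholder name): # Resume a suspended cron workflow: argo cron resume my-cron-wf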
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron_resume/#see-also","text":"argo cron - manage cron workflows","title":"SEE ALSO"},{"location":"cli/argo_cron_suspend/","text":"argo cron suspend \u00b6 suspend zero or more cron workflows argo cron suspend CRON_WORKFLOW... [flags] Options \u00b6 -h, --help help for suspend Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cron - manage cron workflows","title":"argo cron suspend"},{"location":"cli/argo_cron_suspend/#argo-cron-suspend","text":"suspend zero or more cron workflows argo cron suspend CRON_WORKFLOW... [flags]","title":"argo cron suspend"},{"location":"cli/argo_cron_suspend/#options","text":"-h, --help help for suspend","title":"Options"},{"location":"cli/argo_cron_suspend/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
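Illustrative usage for argo cron suspend (my-cron-wf is a placeholder name): # Suspend a cron workflow so no new runs are scheduled: argo cron suspend my-cron-wf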
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron_suspend/#see-also","text":"argo cron - manage cron workflows","title":"SEE ALSO"},{"location":"cli/argo_delete/","text":"argo delete \u00b6 delete workflows argo delete [--dry-run] [WORKFLOW...|[--all] [--older] [--completed] [--resubmitted] [--prefix PREFIX] [--selector SELECTOR] [--force] [--status STATUS] ] [flags] Examples \u00b6 # Delete a workflow: argo delete my-wf # Delete the latest workflow: argo delete @latest Options \u00b6 --all Delete all workflows -A, --all-namespaces Delete workflows from all namespaces --completed Delete completed workflows --dry-run Do not delete the workflow, only print what would happen --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. --force Force delete workflows by removing finalizers -h, --help help for delete --older string Delete completed workflows finished before the specified duration (e.g. 10m, 3h, 1d) --prefix string Delete workflows by prefix --query-chunk-size int Run the list query in chunks (deletes will still be executed individually) --resubmitted Delete resubmitted workflows -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) --status strings Delete by status (comma separated) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo delete"},{"location":"cli/argo_delete/#argo-delete","text":"delete workflows argo delete [--dry-run] [WORKFLOW...|[--all] [--older] [--completed] [--resubmitted] [--prefix PREFIX] [--selector SELECTOR] [--force] [--status STATUS] ] [flags]","title":"argo delete"},{"location":"cli/argo_delete/#examples","text":"# Delete a workflow: argo delete my-wf # Delete the latest workflow: argo delete @latest","title":"Examples"},{"location":"cli/argo_delete/#options","text":"--all Delete all workflows -A, --all-namespaces Delete workflows from all namespaces --completed Delete completed workflows --dry-run Do not delete the workflow, only print what would happen --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. --force Force delete workflows by removing finalizers -h, --help help for delete --older string Delete completed workflows finished before the specified duration (e.g. 
10m, 3h, 1d) --prefix string Delete workflows by prefix --query-chunk-size int Run the list query in chunks (deletes will still be executed individually) --resubmitted Delete resubmitted workflows -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) --status strings Delete by status (comma separated)","title":"Options"},{"location":"cli/argo_delete/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
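A further illustrative sketch combining the bulk-deletion flags documented above (the duration value is a placeholder): # Preview deletion of completed workflows that finished more than a day ago: argo delete --completed --older 1d --dry-run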
--loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_delete/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_executor-plugin/","text":"argo executor-plugin \u00b6 manage executor plugins argo executor-plugin [flags] Options \u00b6 -h, --help help for executor-plugin Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo argo executor-plugin build - build an executor plugin","title":"argo executor-plugin"},{"location":"cli/argo_executor-plugin/#argo-executor-plugin","text":"manage executor plugins argo executor-plugin [flags]","title":"argo executor-plugin"},{"location":"cli/argo_executor-plugin/#options","text":"-h, --help help for executor-plugin","title":"Options"},{"location":"cli/argo_executor-plugin/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_executor-plugin/#see-also","text":"argo - argo is the command line interface to Argo argo executor-plugin build - build an executor plugin","title":"SEE ALSO"},{"location":"cli/argo_executor-plugin_build/","text":"argo executor-plugin build \u00b6 build an executor plugin argo executor-plugin build DIR [flags] Options \u00b6 -h, --help help for build Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo executor-plugin - manage executor plugins","title":"argo executor-plugin build"},{"location":"cli/argo_executor-plugin_build/#argo-executor-plugin-build","text":"build an executor plugin argo executor-plugin build DIR [flags]","title":"argo executor-plugin build"},{"location":"cli/argo_executor-plugin_build/#options","text":"-h, --help help for build","title":"Options"},{"location":"cli/argo_executor-plugin_build/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_executor-plugin_build/#see-also","text":"argo executor-plugin - manage executor plugins","title":"SEE ALSO"},{"location":"cli/argo_get/","text":"argo get \u00b6 display details about a workflow argo get WORKFLOW... [flags] Examples \u00b6 # Get information about a workflow: argo get my-wf # Get the latest workflow: argo get @latest Options \u00b6 -h, --help help for get --no-color Disable colorized output --no-utf8 Use plain 7-bits ascii characters --node-field-selector string selector of node to display, eg: --node-field-selector phase=abc -o, --output string Output format. One of: json|yaml|short|wide --status string Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo get"},{"location":"cli/argo_get/#argo-get","text":"display details about a workflow argo get WORKFLOW... [flags]","title":"argo get"},{"location":"cli/argo_get/#examples","text":"# Get information about a workflow: argo get my-wf # Get the latest workflow: argo get @latest","title":"Examples"},{"location":"cli/argo_get/#options","text":"-h, --help help for get --no-color Disable colorized output --no-utf8 Use plain 7-bits ascii characters --node-field-selector string selector of node to display, eg: --node-field-selector phase=abc -o, --output string Output format. One of: json|yaml|short|wide --status string Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error)","title":"Options"},{"location":"cli/argo_get/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. 
(default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_get/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_lint/","text":"argo lint \u00b6 validate files or directories of manifests argo lint FILE... [flags] Examples \u00b6 # Lint all manifests in a specified directory: argo lint ./manifests # Lint only manifests of Workflows and CronWorkflows from stdin: cat manifests.yaml | argo lint --kinds=workflows,cronworkflows - Options \u00b6 -h, --help help for lint --kinds strings Which kinds will be linted. Can be: workflows|workflowtemplates|cronworkflows|clusterworkflowtemplates (default [all]) --offline perform offline linting. For resources referencing other resources, the references will be resolved from the provided args -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict Perform strict workflow validation (default true) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo lint"},{"location":"cli/argo_lint/#argo-lint","text":"validate files or directories of manifests argo lint FILE... [flags]","title":"argo lint"},{"location":"cli/argo_lint/#examples","text":"# Lint all manifests in a specified directory: argo lint ./manifests # Lint only manifests of Workflows and CronWorkflows from stdin: cat manifests.yaml | argo lint --kinds=workflows,cronworkflows -","title":"Examples"},{"location":"cli/argo_lint/#options","text":"-h, --help help for lint --kinds strings Which kinds will be linted. Can be: workflows|workflowtemplates|cronworkflows|clusterworkflowtemplates (default [all]) --offline perform offline linting. For resources referencing other resources, the references will be resolved from the provided args -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict Perform strict workflow validation (default true)","title":"Options"},{"location":"cli/argo_lint/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_lint/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_list/","text":"argo list \u00b6 list workflows argo list [flags] Examples \u00b6 # List all workflows: argo list # List all workflows from all namespaces: argo list -A # List all running workflows: argo list --running # List all completed workflows: argo list --completed # List workflows created within the last 10m: argo list --since 10m # List workflows that finished more than 2h ago: argo list --older 2h # List workflows with more information (such as parameters): argo list -o wide # List workflows in YAML format: argo list -o yaml # List workflows that have both labels: argo list -l label1=value1,label2=value2 Options \u00b6 -A, --all-namespaces Show workflows from all namespaces --chunk-size int Return large lists in chunks rather than all at once. Pass 0 to disable. --completed Show completed workflows. Mutually exclusive with --running. --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for list --no-headers Don't print headers (default print headers). --older string List completed workflows finished before the specified duration (e.g. 10m, 3h, 1d) -o, --output string Output format. One of: name|wide|yaml|json --prefix string Filter workflows by prefix --resubmitted Show resubmitted workflows --running Show running workflows. Mutually exclusive with --completed. -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. 
-l key1=value1,key2=value2) --since string Show only workflows created after than a relative duration --status strings Filter by status (comma separated) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo list"},{"location":"cli/argo_list/#argo-list","text":"list workflows argo list [flags]","title":"argo list"},{"location":"cli/argo_list/#examples","text":"# List all workflows: argo list # List all workflows from all namespaces: argo list -A # List all running workflows: argo list --running # List all completed workflows: argo list --completed # List workflows created within the last 10m: argo list --since 10m # List workflows that finished more than 2h ago: argo list --older 2h # List workflows with more information (such as parameters): argo list -o wide # List workflows in YAML format: argo list -o yaml # List workflows that have both labels: argo list -l label1=value1,label2=value2","title":"Examples"},{"location":"cli/argo_list/#options","text":"-A, --all-namespaces Show workflows from all namespaces --chunk-size int Return large lists in chunks rather than all at once. Pass 0 to disable. --completed Show completed workflows. Mutually exclusive with --running. --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for list --no-headers Don't print headers (default print headers). --older string List completed workflows finished before the specified duration (e.g. 10m, 3h, 1d) -o, --output string Output format. One of: name|wide|yaml|json --prefix string Filter workflows by prefix --resubmitted Show resubmitted workflows --running Show running workflows. Mutually exclusive with --completed. -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) --since string Show only workflows created after than a relative duration --status strings Filter by status (comma separated)","title":"Options"},{"location":"cli/argo_list/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. 
Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_list/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_logs/","text":"argo logs \u00b6 view logs of a pod or workflow argo logs WORKFLOW [POD] [flags] Examples \u00b6 # Print the logs of a workflow: argo logs my-wf # Follow the logs of a workflows: argo logs my-wf --follow # Print the logs of a workflows with a selector: argo logs my-wf -l app=sth # Print the logs of single container in a pod argo logs my-wf my-pod -c my-container # Print the logs of a workflow's pods: argo logs my-wf my-pod # Print the logs of a pods: argo logs --since=1h my-pod # Print the logs of the latest workflow: argo logs @latest Options \u00b6 -c, --container string Print the logs of this container (default \"main\") -f, --follow Specify if the logs should be streamed. --grep string grep for lines -h, --help help for logs --no-color Disable colorized output -p, --previous Specify if the previously terminated container logs should be returned. -l, --selector string log selector for some pod --since duration Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. Only one of since-time / since may be used. --since-time string Only return logs after a specific date (RFC3339). Defaults to all logs. Only one of since-time / since may be used. --tail int If set, the number of lines from the end of the logs to show. If not specified, logs are shown from the creation of the container or sinceSeconds or sinceTime (default -1) --timestamps Include timestamps on each line in the log output Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. 
--as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo logs"},{"location":"cli/argo_logs/#argo-logs","text":"view logs of a pod or workflow argo logs WORKFLOW [POD] [flags]","title":"argo logs"},{"location":"cli/argo_logs/#examples","text":"# Print the logs of a workflow: argo logs my-wf # Follow the logs of a workflows: argo logs my-wf --follow # Print the logs of a workflows with a selector: argo logs my-wf -l app=sth # Print the logs of single container in a pod argo logs my-wf my-pod -c my-container # Print the logs of a workflow's pods: argo logs my-wf my-pod # Print the logs of a pods: argo logs --since=1h my-pod # Print the logs of the latest workflow: argo logs @latest","title":"Examples"},{"location":"cli/argo_logs/#options","text":"-c, --container string Print the logs of this container (default \"main\") -f, --follow Specify if the logs should be streamed. 
--grep string grep for lines -h, --help help for logs --no-color Disable colorized output -p, --previous Specify if the previously terminated container logs should be returned. -l, --selector string log selector for some pod --since duration Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. Only one of since-time / since may be used. --since-time string Only return logs after a specific date (RFC3339). Defaults to all logs. Only one of since-time / since may be used. --tail int If set, the number of lines from the end of the logs to show. If not specified, logs are shown from the creation of the container or sinceSeconds or sinceTime (default -1) --timestamps Include timestamps on each line in the log output","title":"Options"},{"location":"cli/argo_logs/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_logs/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_node/","text":"argo node \u00b6 perform action on a node in a workflow argo node ACTION WORKFLOW FLAGS [flags] Examples \u00b6 # Set outputs to a node within a workflow: argo node set my-wf --output-parameter parameter-name=\"Hello, world!\" --node-field-selector displayName=approve # Set the message of a node within a workflow: argo node set my-wf --message \"We did it!\"\" --node-field-selector displayName=approve Options \u00b6 -h, --help help for node -m, --message string Set the message of a node, eg: --message \"Hello, world!\" --node-field-selector string Selector of node to set, eg: --node-field-selector inputs.paramaters.myparam.value=abc -p, --output-parameter stringArray Set a \"supplied\" output parameter of node, eg: --output-parameter parameter-name=\"Hello, world!\" --phase string Phase to set the node to, eg: --phase Succeeded Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 
1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo node"},{"location":"cli/argo_node/#argo-node","text":"perform action on a node in a workflow argo node ACTION WORKFLOW FLAGS [flags]","title":"argo node"},{"location":"cli/argo_node/#examples","text":"# Set outputs to a node within a workflow: argo node set my-wf --output-parameter parameter-name=\"Hello, world!\" --node-field-selector displayName=approve # Set the message of a node within a workflow: argo node set my-wf --message \"We did it!\"\" --node-field-selector displayName=approve","title":"Examples"},{"location":"cli/argo_node/#options","text":"-h, --help help for node -m, --message string Set the message of a node, eg: --message \"Hello, world!\" --node-field-selector string Selector of node to set, eg: --node-field-selector inputs.paramaters.myparam.value=abc -p, --output-parameter stringArray Set a \"supplied\" output parameter of node, eg: --output-parameter parameter-name=\"Hello, world!\" --phase string Phase to set the node to, eg: --phase Succeeded","title":"Options"},{"location":"cli/argo_node/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. 
Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_node/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_resubmit/","text":"argo resubmit \u00b6 resubmit one or more workflows Synopsis \u00b6 Submit a completed workflow again. Optionally override parameters and memoize. Similar to running argo submit again with the same parameters. argo resubmit [WORKFLOW...] [flags] Examples \u00b6 # Resubmit a workflow: argo resubmit my-wf # Resubmit multiple workflows: argo resubmit my-wf my-other-wf my-third-wf # Resubmit multiple workflows by label selector: argo resubmit -l workflows.argoproj.io/test=true # Resubmit multiple workflows by field selector: argo resubmit --field-selector metadata.namespace=argo # Resubmit and wait for completion: argo resubmit --wait my-wf.yaml # Resubmit and watch until completion: argo resubmit --watch my-wf.yaml # Resubmit and tail logs until completion: argo resubmit --log my-wf.yaml # Resubmit the latest workflow: argo resubmit @latest Options \u00b6 --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for resubmit --log log the workflow until it completes --memoized re-use successful steps & outputs from the previous run -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --priority int32 workflow priority -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is resubmitted --watch watch the workflow until it completes, only works when a single workflow is resubmitted Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. 
--as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo resubmit"},{"location":"cli/argo_resubmit/#argo-resubmit","text":"resubmit one or more workflows","title":"argo resubmit"},{"location":"cli/argo_resubmit/#synopsis","text":"Submit a completed workflow again. Optionally override parameters and memoize. Similar to running argo submit again with the same parameters. argo resubmit [WORKFLOW...] 
[flags]","title":"Synopsis"},{"location":"cli/argo_resubmit/#examples","text":"# Resubmit a workflow: argo resubmit my-wf # Resubmit multiple workflows: argo resubmit my-wf my-other-wf my-third-wf # Resubmit multiple workflows by label selector: argo resubmit -l workflows.argoproj.io/test=true # Resubmit multiple workflows by field selector: argo resubmit --field-selector metadata.namespace=argo # Resubmit and wait for completion: argo resubmit --wait my-wf.yaml # Resubmit and watch until completion: argo resubmit --watch my-wf.yaml # Resubmit and tail logs until completion: argo resubmit --log my-wf.yaml # Resubmit the latest workflow: argo resubmit @latest","title":"Examples"},{"location":"cli/argo_resubmit/#options","text":"--field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for resubmit --log log the workflow until it completes --memoized re-use successful steps & outputs from the previous run -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --priority int32 workflow priority -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is resubmitted --watch watch the workflow until it completes, only works when a single workflow is resubmitted","title":"Options"},{"location":"cli/argo_resubmit/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_resubmit/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_resume/","text":"argo resume \u00b6 resume zero or more workflows (opposite of suspend) argo resume WORKFLOW1 WORKFLOW2... [flags] Examples \u00b6 # Resume a workflow that has been suspended: argo resume my-wf # Resume multiple workflows: argo resume my-wf my-other-wf my-third-wf # Resume the latest workflow: argo resume @latest # Resume multiple workflows by node field selector: argo resume --node-field-selector inputs.paramaters.myparam.value=abc Options \u00b6 -h, --help help for resume --node-field-selector string selector of node to resume, eg: --node-field-selector inputs.paramaters.myparam.value=abc Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. 
--instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo resume"},{"location":"cli/argo_resume/#argo-resume","text":"resume zero or more workflows (opposite of suspend) argo resume WORKFLOW1 WORKFLOW2... [flags]","title":"argo resume"},{"location":"cli/argo_resume/#examples","text":"# Resume a workflow that has been suspended: argo resume my-wf # Resume multiple workflows: argo resume my-wf my-other-wf my-third-wf # Resume the latest workflow: argo resume @latest # Resume multiple workflows by node field selector: argo resume --node-field-selector inputs.paramaters.myparam.value=abc","title":"Examples"},{"location":"cli/argo_resume/#options","text":"-h, --help help for resume --node-field-selector string selector of node to resume, eg: --node-field-selector inputs.paramaters.myparam.value=abc","title":"Options"},{"location":"cli/argo_resume/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. 
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_resume/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_retry/","text":"argo retry \u00b6 retry zero or more workflows Synopsis \u00b6 Rerun a failed Workflow. Specifically, rerun all failed steps. The same Workflow object is used and no new Workflows are created. argo retry [WORKFLOW...] [flags] Examples \u00b6 # Retry a workflow: argo retry my-wf # Retry multiple workflows: argo retry my-wf my-other-wf my-third-wf # Retry multiple workflows by label selector: argo retry -l workflows.argoproj.io/test=true # Retry multiple workflows by field selector: argo retry --field-selector metadata.namespace=argo # Retry and wait for completion: argo retry --wait my-wf.yaml # Retry and watch until completion: argo retry --watch my-wf.yaml # Retry and tail logs until completion: argo retry --log my-wf.yaml # Retry the latest workflow: argo retry @latest # Restart node with id 5 on successful workflow, using node-field-selector argo retry my-wf --restart-successful --node-field-selector id=5 Options \u00b6 --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for retry --log log the workflow until it completes --node-field-selector string selector of nodes to reset, eg: --node-field-selector inputs.paramaters.myparam.value=abc -o, --output string Output format. 
One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --restart-successful indicates to restart successful nodes matching the --node-field-selector -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is retried --watch watch the workflow until it completes, only works when a single workflow is retried Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo retry"},{"location":"cli/argo_retry/#argo-retry","text":"retry zero or more workflows","title":"argo retry"},{"location":"cli/argo_retry/#synopsis","text":"Rerun a failed Workflow. Specifically, rerun all failed steps. The same Workflow object is used and no new Workflows are created. argo retry [WORKFLOW...] [flags]","title":"Synopsis"},{"location":"cli/argo_retry/#examples","text":"# Retry a workflow: argo retry my-wf # Retry multiple workflows: argo retry my-wf my-other-wf my-third-wf # Retry multiple workflows by label selector: argo retry -l workflows.argoproj.io/test=true # Retry multiple workflows by field selector: argo retry --field-selector metadata.namespace=argo # Retry and wait for completion: argo retry --wait my-wf.yaml # Retry and watch until completion: argo retry --watch my-wf.yaml # Retry and tail logs until completion: argo retry --log my-wf.yaml # Retry the latest workflow: argo retry @latest # Restart node with id 5 on successful workflow, using node-field-selector argo retry my-wf --restart-successful --node-field-selector id=5","title":"Examples"},{"location":"cli/argo_retry/#options","text":"--field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for retry --log log the workflow until it completes --node-field-selector string selector of nodes to reset, eg: --node-field-selector inputs.paramaters.myparam.value=abc -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --restart-successful indicates to restart successful nodes matching the --node-field-selector -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is retried --watch watch the workflow until it completes, only works when a single workflow is retried","title":"Options"},{"location":"cli/argo_retry/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. 
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_retry/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_server/","text":"argo server \u00b6 start the Argo Server argo server [flags] Examples \u00b6 See https://argoproj.github.io/argo-workflows/argo-server/ Options \u00b6 --access-control-allow-origin string Set Access-Control-Allow-Origin header in HTTP responses. --allowed-link-protocol stringArray Allowed link protocol in configMap. Used if the allowed configMap links protocol are different from http,https. Defaults to the environment variable ALLOWED_LINK_PROTOCOL (default [http,https]) --api-rate-limit uint Set limit per IP for api ratelimiter (default 1000) --auth-mode stringArray API server authentication mode. Any 1 or more length permutation of: client,server,sso (default [client]) --basehref string Value for base href in index.html. Used if the server is running behind reverse proxy under subpath different from /. Defaults to the environment variable BASE_HREF. (default \"/\") -b, --browser enable automatic launching of the browser [local mode] --configmap string Name of K8s configmap to retrieve workflow controller configuration (default \"workflow-controller-configmap\") --event-async-dispatch dispatch event async --event-operation-queue-size int how many events operations that can be queued at once (default 16) --event-worker-count int how many event workers to run (default 4) -h, --help help for server --hsts Whether or not we should add a HTTP Secure Transport Security header. This only has effect if secure is enabled. (default true) --kube-api-burst int Burst to use while talking with kube-apiserver. (default 30) --kube-api-qps float32 QPS to use while talking with kube-apiserver. 
(default 20) --log-format string The formatter to use for logs. One of: text|json (default \"text\") --managed-namespace string namespace that watches, default to the installation namespace --namespaced run as namespaced mode -p, --port int Port to listen on (default 2746) -e, --secure Whether or not we should listen on TLS. (default true) --tls-certificate-secret-name string The name of a Kubernetes secret that contains the server certificates --x-frame-options string Set X-Frame-Options header in HTTP responses. (default \"DENY\") Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo server"},{"location":"cli/argo_server/#argo-server","text":"start the Argo Server argo server [flags]","title":"argo server"},{"location":"cli/argo_server/#examples","text":"See https://argoproj.github.io/argo-workflows/argo-server/","title":"Examples"},{"location":"cli/argo_server/#options","text":"--access-control-allow-origin string Set Access-Control-Allow-Origin header in HTTP responses. --allowed-link-protocol stringArray Allowed link protocol in configMap. Used if the allowed configMap links protocol are different from http,https. Defaults to the environment variable ALLOWED_LINK_PROTOCOL (default [http,https]) --api-rate-limit uint Set limit per IP for api ratelimiter (default 1000) --auth-mode stringArray API server authentication mode. Any 1 or more length permutation of: client,server,sso (default [client]) --basehref string Value for base href in index.html. Used if the server is running behind reverse proxy under subpath different from /. Defaults to the environment variable BASE_HREF. (default \"/\") -b, --browser enable automatic launching of the browser [local mode] --configmap string Name of K8s configmap to retrieve workflow controller configuration (default \"workflow-controller-configmap\") --event-async-dispatch dispatch event async --event-operation-queue-size int how many events operations that can be queued at once (default 16) --event-worker-count int how many event workers to run (default 4) -h, --help help for server --hsts Whether or not we should add a HTTP Secure Transport Security header. This only has effect if secure is enabled. (default true) --kube-api-burst int Burst to use while talking with kube-apiserver. (default 30) --kube-api-qps float32 QPS to use while talking with kube-apiserver. (default 20) --log-format string The formatter to use for logs. One of: text|json (default \"text\") --managed-namespace string namespace that watches, default to the installation namespace --namespaced run as namespaced mode -p, --port int Port to listen on (default 2746) -e, --secure Whether or not we should listen on TLS. (default true) --tls-certificate-secret-name string The name of a Kubernetes secret that contains the server certificates --x-frame-options string Set X-Frame-Options header in HTTP responses. (default \"DENY\")","title":"Options"},{"location":"cli/argo_server/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. 
(Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_server/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_stop/","text":"argo stop \u00b6 stop zero or more workflows allowing all exit handlers to run Synopsis \u00b6 Stop a workflow but still run exit handlers. argo stop WORKFLOW WORKFLOW2... [flags] Examples \u00b6 # Stop a workflow: argo stop my-wf # Stop the latest workflow: argo stop @latest # Stop multiple workflows by label selector argo stop -l workflows.argoproj.io/test=true # Stop multiple workflows by field selector argo stop --field-selector metadata.namespace=argo Options \u00b6 --dry-run If true, only print the workflows that would be stopped, without stopping them. --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for stop --message string Message to add to previously running nodes --node-field-selector string selector of node to stop, eg: --node-field-selector inputs.paramaters.myparam.value=abc -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. 
--as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo stop"},{"location":"cli/argo_stop/#argo-stop","text":"stop zero or more workflows allowing all exit handlers to run","title":"argo stop"},{"location":"cli/argo_stop/#synopsis","text":"Stop a workflow but still run exit handlers. argo stop WORKFLOW WORKFLOW2... [flags]","title":"Synopsis"},{"location":"cli/argo_stop/#examples","text":"# Stop a workflow: argo stop my-wf # Stop the latest workflow: argo stop @latest # Stop multiple workflows by label selector argo stop -l workflows.argoproj.io/test=true # Stop multiple workflows by field selector argo stop --field-selector metadata.namespace=argo","title":"Examples"},{"location":"cli/argo_stop/#options","text":"--dry-run If true, only print the workflows that would be stopped, without stopping them. --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). 
The server only supports a limited number of field queries per type. -h, --help help for stop --message string Message to add to previously running nodes --node-field-selector string selector of node to stop, eg: --node-field-selector inputs.paramaters.myparam.value=abc -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)","title":"Options"},{"location":"cli/argo_stop/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_stop/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_submit/","text":"argo submit \u00b6 submit a workflow argo submit [FILE... | --from `kind/name] [flags] Examples \u00b6 # Submit multiple workflows from files: argo submit my-wf.yaml # Submit and wait for completion: argo submit --wait my-wf.yaml # Submit and watch until completion: argo submit --watch my-wf.yaml # Submit and tail logs until completion: argo submit --log my-wf.yaml # Submit a single workflow from an existing resource argo submit --from cronwf/my-cron-wf Options \u00b6 --dry-run modify the workflow on the client-side without creating it --entrypoint string override entrypoint --from kind/name Submit from an existing kind/name E.g., --from=cronwf/hello-world-cwf --generate-name string override metadata.generateName -h, --help help for submit -l, --labels string Comma separated labels to apply to the workflow. Will override previous values. --log log the workflow until it completes --name string override metadata.name --node-field-selector string selector of node to display, eg: --node-field-selector phase=abc -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray pass an input parameter -f, --parameter-file string pass a file containing all input parameters --priority int32 workflow priority --scheduled-time string Override the workflow's scheduledTime parameter (useful for backfilling). The time must be RFC3339 --server-dry-run send request to server with dry-run flag which will modify the workflow without creating it --serviceaccount string run all pods in the workflow using specified serviceaccount --status string Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error). Should only be used with --watch. --strict perform strict workflow validation (default true) -w, --wait wait for the workflow to complete --watch watch the workflow until it completes Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo submit"},{"location":"cli/argo_submit/#argo-submit","text":"submit a workflow argo submit [FILE... | --from `kind/name] [flags]","title":"argo submit"},{"location":"cli/argo_submit/#examples","text":"# Submit multiple workflows from files: argo submit my-wf.yaml # Submit and wait for completion: argo submit --wait my-wf.yaml # Submit and watch until completion: argo submit --watch my-wf.yaml # Submit and tail logs until completion: argo submit --log my-wf.yaml # Submit a single workflow from an existing resource argo submit --from cronwf/my-cron-wf","title":"Examples"},{"location":"cli/argo_submit/#options","text":"--dry-run modify the workflow on the client-side without creating it --entrypoint string override entrypoint --from kind/name Submit from an existing kind/name E.g., --from=cronwf/hello-world-cwf --generate-name string override metadata.generateName -h, --help help for submit -l, --labels string Comma separated labels to apply to the workflow. Will override previous values. --log log the workflow until it completes --name string override metadata.name --node-field-selector string selector of node to display, eg: --node-field-selector phase=abc -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray pass an input parameter -f, --parameter-file string pass a file containing all input parameters --priority int32 workflow priority --scheduled-time string Override the workflow's scheduledTime parameter (useful for backfilling). The time must be RFC3339 --server-dry-run send request to server with dry-run flag which will modify the workflow without creating it --serviceaccount string run all pods in the workflow using specified serviceaccount --status string Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error). Should only be used with --watch. 
--strict perform strict workflow validation (default true) -w, --wait wait for the workflow to complete --watch watch the workflow until it completes","title":"Options"},{"location":"cli/argo_submit/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_submit/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_suspend/","text":"argo suspend \u00b6 suspend zero or more workflows (opposite of resume) argo suspend WORKFLOW1 WORKFLOW2... 
[flags] Examples \u00b6 # Suspend a workflow: argo suspend my-wf # Suspend the latest workflow: argo suspend @latest Options \u00b6 -h, --help help for suspend Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo suspend"},{"location":"cli/argo_suspend/#argo-suspend","text":"suspend zero or more workflows (opposite of resume) argo suspend WORKFLOW1 WORKFLOW2... 
[flags]","title":"argo suspend"},{"location":"cli/argo_suspend/#examples","text":"# Suspend a workflow: argo suspend my-wf # Suspend the latest workflow: argo suspend @latest","title":"Examples"},{"location":"cli/argo_suspend/#options","text":"-h, --help help for suspend","title":"Options"},{"location":"cli/argo_suspend/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_suspend/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_template/","text":"argo template \u00b6 manipulate workflow templates argo template [flags] Options \u00b6 -h, --help help for template Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo argo template create - create a workflow template argo template delete - delete a workflow template argo template get - display details about a workflow template argo template lint - validate a file or directory of workflow template manifests argo template list - list workflow templates","title":"argo template"},{"location":"cli/argo_template/#argo-template","text":"manipulate workflow templates argo template [flags]","title":"argo template"},{"location":"cli/argo_template/#options","text":"-h, --help help for template","title":"Options"},{"location":"cli/argo_template/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_template/#see-also","text":"argo - argo is the command line interface to Argo argo template create - create a workflow template argo template delete - delete a workflow template argo template get - display details about a workflow template argo template lint - validate a file or directory of workflow template manifests argo template list - list workflow templates","title":"SEE ALSO"},{"location":"cli/argo_template_create/","text":"argo template create \u00b6 create a workflow template argo template create FILE1 FILE2... [flags] Options \u00b6 -h, --help help for create -o, --output string Output format. One of: name|json|yaml|wide --strict perform strict workflow validation (default true) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. 
(default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo template - manipulate workflow templates","title":"argo template create"},{"location":"cli/argo_template_create/#argo-template-create","text":"create a workflow template argo template create FILE1 FILE2... [flags]","title":"argo template create"},{"location":"cli/argo_template_create/#options","text":"-h, --help help for create -o, --output string Output format. One of: name|json|yaml|wide --strict perform strict workflow validation (default true)","title":"Options"},{"location":"cli/argo_template_create/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. 
(default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_template_create/#see-also","text":"argo template - manipulate workflow templates","title":"SEE ALSO"},{"location":"cli/argo_template_delete/","text":"argo template delete \u00b6 delete a workflow template argo template delete WORKFLOW_TEMPLATE [flags] Options \u00b6 --all Delete all workflow templates -h, --help help for delete Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. 
If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo template - manipulate workflow templates","title":"argo template delete"},{"location":"cli/argo_template_delete/#argo-template-delete","text":"delete a workflow template argo template delete WORKFLOW_TEMPLATE [flags]","title":"argo template delete"},{"location":"cli/argo_template_delete/#options","text":"--all Delete all workflow templates -h, --help help for delete","title":"Options"},{"location":"cli/argo_template_delete/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
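For instance (file and template names are hypothetical), creation can emit just the resulting name for scripting, and deletion can target one template or all of them:
# Create a template and print only its name:
argo template create my-template.yaml -o name
# Delete a single template, or every template in the current namespace:
argo template delete my-template
argo template delete --all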
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_template_delete/#see-also","text":"argo template - manipulate workflow templates","title":"SEE ALSO"},{"location":"cli/argo_template_get/","text":"argo template get \u00b6 display details about a workflow template argo template get WORKFLOW_TEMPLATE... [flags] Options \u00b6 -h, --help help for get -o, --output string Output format. One of: json|yaml|wide Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo template - manipulate workflow templates","title":"argo template get"},{"location":"cli/argo_template_get/#argo-template-get","text":"display details about a workflow template argo template get WORKFLOW_TEMPLATE... [flags]","title":"argo template get"},{"location":"cli/argo_template_get/#options","text":"-h, --help help for get -o, --output string Output format. One of: json|yaml|wide","title":"Options"},{"location":"cli/argo_template_get/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_template_get/#see-also","text":"argo template - manipulate workflow templates","title":"SEE ALSO"},{"location":"cli/argo_template_lint/","text":"argo template lint \u00b6 validate a file or directory of workflow template manifests argo template lint (DIRECTORY | FILE1 FILE2 FILE3...) [flags] Options \u00b6 -h, --help help for lint -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict perform strict workflow validation (default true) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
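For example (template name hypothetical), the stored template can be printed in full as YAML:
argo template get my-template -o yaml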
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo template - manipulate workflow templates","title":"argo template lint"},{"location":"cli/argo_template_lint/#argo-template-lint","text":"validate a file or directory of workflow template manifests argo template lint (DIRECTORY | FILE1 FILE2 FILE3...) [flags]","title":"argo template lint"},{"location":"cli/argo_template_lint/#options","text":"-h, --help help for lint -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict perform strict workflow validation (default true)","title":"Options"},{"location":"cli/argo_template_lint/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
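For example (directory name hypothetical), a whole directory of manifests can be validated with machine-friendly output:
argo template lint templates/ -o simple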
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_template_lint/#see-also","text":"argo template - manipulate workflow templates","title":"SEE ALSO"},{"location":"cli/argo_template_list/","text":"argo template list \u00b6 list workflow templates argo template list [flags] Options \u00b6 -A, --all-namespaces Show workflows from all namespaces -h, --help help for list -o, --output string Output format. One of: wide|name Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
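For example, templates in every namespace can be listed with the wider column set:
argo template list -A -o wide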
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo template - manipulate workflow templates","title":"argo template list"},{"location":"cli/argo_template_list/#argo-template-list","text":"list workflow templates argo template list [flags]","title":"argo template list"},{"location":"cli/argo_template_list/#options","text":"-A, --all-namespaces Show workflows from all namespaces -h, --help help for list -o, --output string Output format. One of: wide|name","title":"Options"},{"location":"cli/argo_template_list/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_template_list/#see-also","text":"argo template - manipulate workflow templates","title":"SEE ALSO"},{"location":"cli/argo_terminate/","text":"argo terminate \u00b6 terminate zero or more workflows immediately Synopsis \u00b6 Immediately stop a workflow and do not run any exit handlers. argo terminate WORKFLOW WORKFLOW2... [flags] Examples \u00b6 # Terminate a workflow: argo terminate my-wf # Terminate the latest workflow: argo terminate @latest # Terminate multiple workflows by label selector argo terminate -l workflows.argoproj.io/test=true # Terminate multiple workflows by field selector argo terminate --field-selector metadata.namespace=argo Options \u00b6 --dry-run Do not terminate the workflow, only print what would happen --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for terminate -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo terminate"},{"location":"cli/argo_terminate/#argo-terminate","text":"terminate zero or more workflows immediately","title":"argo terminate"},{"location":"cli/argo_terminate/#synopsis","text":"Immediately stop a workflow and do not run any exit handlers. argo terminate WORKFLOW WORKFLOW2... [flags]","title":"Synopsis"},{"location":"cli/argo_terminate/#examples","text":"# Terminate a workflow: argo terminate my-wf # Terminate the latest workflow: argo terminate @latest # Terminate multiple workflows by label selector argo terminate -l workflows.argoproj.io/test=true # Terminate multiple workflows by field selector argo terminate --field-selector metadata.namespace=argo","title":"Examples"},{"location":"cli/argo_terminate/#options","text":"--dry-run Do not terminate the workflow, only print what would happen --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for terminate -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)","title":"Options"},{"location":"cli/argo_terminate/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. 
(Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_terminate/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_version/","text":"argo version \u00b6 print version information argo version [flags] Options \u00b6 -h, --help help for version --short print just the version number Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. 
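A more cautious variant of the examples above (label value hypothetical) previews which workflows would be terminated without actually stopping them:
argo terminate -l workflows.argoproj.io/test=true --dry-run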
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo version"},{"location":"cli/argo_version/#argo-version","text":"print version information argo version [flags]","title":"argo version"},{"location":"cli/argo_version/#options","text":"-h, --help help for version --short print just the version number","title":"Options"},{"location":"cli/argo_version/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_version/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_wait/","text":"argo wait \u00b6 waits for workflows to complete argo wait [WORKFLOW...] [flags] Examples \u00b6 # Wait on a workflow: argo wait my-wf # Wait on the latest workflow: argo wait @latest Options \u00b6 -h, --help help for wait --ignore-not-found Ignore the wait if the workflow is not found Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo wait"},{"location":"cli/argo_wait/#argo-wait","text":"waits for workflows to complete argo wait [WORKFLOW...] [flags]","title":"argo wait"},{"location":"cli/argo_wait/#examples","text":"# Wait on a workflow: argo wait my-wf # Wait on the latest workflow: argo wait @latest","title":"Examples"},{"location":"cli/argo_wait/#options","text":"-h, --help help for wait --ignore-not-found Ignore the wait if the workflow is not found","title":"Options"},{"location":"cli/argo_wait/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_wait/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_watch/","text":"argo watch \u00b6 watch a workflow until it completes argo watch WORKFLOW [flags] Examples \u00b6 # Watch a workflow: argo watch my-wf # Watch the latest workflow: argo watch @latest Options \u00b6 -h, --help help for watch --node-field-selector string selector of node to display, eg: --node-field-selector phase=abc --status string Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo watch"},{"location":"cli/argo_watch/#argo-watch","text":"watch a workflow until it completes argo watch WORKFLOW [flags]","title":"argo watch"},{"location":"cli/argo_watch/#examples","text":"# Watch a workflow: argo watch my-wf # Watch the latest workflow: argo watch @latest","title":"Examples"},{"location":"cli/argo_watch/#options","text":"-h, --help help for watch --node-field-selector string selector of node to display, eg: --node-field-selector phase=abc --status string Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error)","title":"Options"},{"location":"cli/argo_watch/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. 
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_watch/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"proposals/artifact-gc-proposal/","text":"Proposal for Artifact Garbage Collection \u00b6 Introduction \u00b6 The motivation for this is to enable users to automatically have certain Artifacts specified to be automatically garbage collected. Artifacts can be specified for Garbage Collection at different stages: OnWorkflowCompletion , OnWorkflowDeletion , OnWorkflowSuccess , OnWorkflowFailure , or Never Proposal Specifics \u00b6 Workflow Spec changes \u00b6 WorkflowSpec has an ArtifactGC structure, which consists of an ArtifactGCStrategy , as well as the optional designation of a ServiceAccount and Pod metadata (labels and annotations) to be used by the Pod doing the deletion. The ArtifactGCStrategy can be set to OnWorkflowCompletion , OnWorkflowDeletion , OnWorkflowSuccess , OnWorkflowFailure , or Never Artifact has an ArtifactGC section which can be used to override the Workflow level. Workflow Status changes \u00b6 Artifact has a boolean Deleted flag WorkflowStatus.Conditions can be set to ArtifactGCError WorkflowStatus can include a new field ArtGCStatus which holds additional information to keep track of the state of Artifact Garbage Collection. How it will work \u00b6 For each ArtifactGCStrategy the Controller will execute one Pod that runs in the user's namespace and deletes all artifacts pertaining to that strategy. Since OnWorkflowSuccess happens at the same time as OnWorkflowCompletion and OnWorkflowFailure also happens at the same time as OnWorkflowCompletion , we can consider consolidating these GC Strategies together. 
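To make the proposed spec changes concrete, here is a minimal sketch of a Workflow that opts in to Artifact Garbage Collection, assuming the artifactGC field with its strategy, the optional serviceAccountName and podMetadata, and the per-artifact override described in this proposal (the exact shape could differ in the final implementation; the Service Account name and IAM role ARN below are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-gc-proposal-
spec:
  entrypoint: main
  artifactGC:
    strategy: OnWorkflowDeletion        # Workflow-level default for all artifacts
    serviceAccountName: artifact-gc-sa  # placeholder Service Account used by the GC Pod
    podMetadata:
      annotations:
        eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/artifact-gc-role  # placeholder
  templates:
    - name: main
      container:
        image: argoproj/argosay:v2
        command: [sh, -c]
        args: [echo scratch > /tmp/scratch.txt]
      outputs:
        artifacts:
          - name: scratch
            path: /tmp/scratch.txt
            artifactGC:
              strategy: Never           # Artifact-level override of the Workflow default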
We will have a new CRD type called ArtifactGCTask and use one or more of them to specify the Artifacts which the GC Pod will read and then write Status to (note individual artifacts have individual statuses). The Controller will read the Status and reflect that in the Workflow Status. The Controller will deem the ArtifactGCTasks ready to read once the Pod has completed (in success or failure). Once the GC Pod has completed and the Workflow status has been persisted, assuming the Pod completed with Success, the Controller can delete the ArtifactGCTasks , which will cause the GC Pod to also get deleted as it will be \"owned\" by the ArtifactGCTasks . The Workflow will have a Finalizer on it to prevent it from being deleted until Artifact GC has occurred. Once all deletions for all GC Strategies have occurred, the Controller will remove the Finalizer. Failures \u00b6 If a deletion fails, the Pod will retry a few times through exponential back off. Note: it will not be considered a failure if the key does not exist - the principal of idempotence will allow this (i.e. if a Pod were to get evicted and then re-run it should be okay if some artifacts were previously deleted). Once it retries a few times, if it didn't succeed, it will end in a \"Failed\" state. The user will manually need to delete the ArtifactGCTasks (which will delete the GC Pod), and remove the Finalizer on the Workflow. The Failure will be reflected in both the Workflow Conditions as well as as a Kubernetes Event (and the Artifacts that failed will have \"Deleted\"=false). Alternatives Considered \u00b6 For reference, these slides were presented to the Argo Contributor meeting on 7/12/22 which go through some of the alternative options that were weighed. These alternatives are explained below: One Pod Per Artifact \u00b6 The POC that was done, which uses just one Pod to delete each Artifact, was considered as an alternative for MVP (Option 1 from the slides). This option has these benefits: simpler in that the Pod doesn't require any additional Object to report status (e.g. ArtifactGCTask ) because it simply succeeds or fails based on its exit code (whereas in Option 2 the Pod needs to report individual failure statuses for each artifact) could have a very minimal Service Account which provides access to just that one artifact's location and these drawbacks: deletion is slower when performed by multiple Pods a Workflow with thousands of artifacts causes thousands of Pods to get executed, which could overwhelm kube-scheduler and kube-apiserver. if we delay the Artifact GC Pods by giving them a lower priority than the Workflow Pods, users will not get their artifacts deleted when they expect and may log bugs Summarizing ADR statement: \"In the context of Artifact Garbage Collection, facing whether to use a separate Pod for every artifact or not, we decided not to, to achieve faster garbage collection and reduced load on K8S, accepting that we will require a new CRD type.\" Service Account/IAM roles \u00b6 We considered some alternatives for how to specify Service Account and/or Annotations, which are applied to give the GC Pod access (slide 12). We will have them specify this information in a new ArtifactGC section of the spec that lives on the Workflow level but can be overridden on the Artifact level (option 3 from slide). 
Another option considered was to just allow specification on the Workflow level (option 2 from slide) so as to reduce the complexity of the code and reduce the potential number of Pods running, but Option 3 was selected in the end to maximize flexibility. Summarizing ADR statement: \"In the context of Artifact Garbage Collection, facing the question of how users should specify Service Account and annotations, we decided to give them the option to specify them on the Workflow level and/or override them on the Artifact level, to maximize flexibility for user needs, accepting that the code will be more complicated, and sometimes there will be many Pods running.\" MVP vs post-MVP \u00b6 We will start with just S3. We can also make other determinations if it makes sense to postpone some parts for after MVP. Workflow Spec Validation \u00b6 We can reject the Workflow during validation if ArtifactGC is configured along with a non-supported storage engine (for now probably anything besides S3). Documentation \u00b6 Need to clarify certain things in our documentation: Users need to know that if they don't name their artifacts with unique keys, they risk the same key being deleted by one Workflow and created by another at the same time. One recommendation is to parametrize the key, e.g. {{workflow.uid}}/hello.txt . Requirement to specify Service Account or Annotation for ArtifactGC specifically if they are needed (we won't fall back to default Workflow SA/annotations). Also, the Service Account needs to either be bound to the \"agent\" role or otherwise allow the same access to ArtifactGCTasks .","title":"Proposal for Artifact Garbage Collection"},{"location":"proposals/artifact-gc-proposal/#proposal-for-artifact-garbage-collection","text":"","title":"Proposal for Artifact Garbage Collection"},{"location":"proposals/artifact-gc-proposal/#introduction","text":"The motivation for this is to enable users to automatically have certain Artifacts specified to be automatically garbage collected. Artifacts can be specified for Garbage Collection at different stages: OnWorkflowCompletion , OnWorkflowDeletion , OnWorkflowSuccess , OnWorkflowFailure , or Never","title":"Introduction"},{"location":"proposals/artifact-gc-proposal/#proposal-specifics","text":"","title":"Proposal Specifics"},{"location":"proposals/artifact-gc-proposal/#workflow-spec-changes","text":"WorkflowSpec has an ArtifactGC structure, which consists of an ArtifactGCStrategy , as well as the optional designation of a ServiceAccount and Pod metadata (labels and annotations) to be used by the Pod doing the deletion. The ArtifactGCStrategy can be set to OnWorkflowCompletion , OnWorkflowDeletion , OnWorkflowSuccess , OnWorkflowFailure , or Never Artifact has an ArtifactGC section which can be used to override the Workflow level.","title":"Workflow Spec changes"},{"location":"proposals/artifact-gc-proposal/#workflow-status-changes","text":"Artifact has a boolean Deleted flag WorkflowStatus.Conditions can be set to ArtifactGCError WorkflowStatus can include a new field ArtGCStatus which holds additional information to keep track of the state of Artifact Garbage Collection.","title":"Workflow Status changes"},{"location":"proposals/artifact-gc-proposal/#how-it-will-work","text":"For each ArtifactGCStrategy the Controller will execute one Pod that runs in the user's namespace and deletes all artifacts pertaining to that strategy. 
Since OnWorkflowSuccess happens at the same time as OnWorkflowCompletion and OnWorkflowFailure also happens at the same time as OnWorkflowCompletion , we can consider consolidating these GC Strategies together. We will have a new CRD type called ArtifactGCTask and use one or more of them to specify the Artifacts which the GC Pod will read and then write Status to (note individual artifacts have individual statuses). The Controller will read the Status and reflect that in the Workflow Status. The Controller will deem the ArtifactGCTasks ready to read once the Pod has completed (in success or failure). Once the GC Pod has completed and the Workflow status has been persisted, assuming the Pod completed with Success, the Controller can delete the ArtifactGCTasks , which will cause the GC Pod to also get deleted as it will be \"owned\" by the ArtifactGCTasks . The Workflow will have a Finalizer on it to prevent it from being deleted until Artifact GC has occurred. Once all deletions for all GC Strategies have occurred, the Controller will remove the Finalizer.","title":"How it will work"},{"location":"proposals/artifact-gc-proposal/#failures","text":"If a deletion fails, the Pod will retry a few times through exponential back off. Note: it will not be considered a failure if the key does not exist - the principal of idempotence will allow this (i.e. if a Pod were to get evicted and then re-run it should be okay if some artifacts were previously deleted). Once it retries a few times, if it didn't succeed, it will end in a \"Failed\" state. The user will manually need to delete the ArtifactGCTasks (which will delete the GC Pod), and remove the Finalizer on the Workflow. The Failure will be reflected in both the Workflow Conditions as well as as a Kubernetes Event (and the Artifacts that failed will have \"Deleted\"=false).","title":"Failures"},{"location":"proposals/artifact-gc-proposal/#alternatives-considered","text":"For reference, these slides were presented to the Argo Contributor meeting on 7/12/22 which go through some of the alternative options that were weighed. These alternatives are explained below:","title":"Alternatives Considered"},{"location":"proposals/artifact-gc-proposal/#one-pod-per-artifact","text":"The POC that was done, which uses just one Pod to delete each Artifact, was considered as an alternative for MVP (Option 1 from the slides). This option has these benefits: simpler in that the Pod doesn't require any additional Object to report status (e.g. ArtifactGCTask ) because it simply succeeds or fails based on its exit code (whereas in Option 2 the Pod needs to report individual failure statuses for each artifact) could have a very minimal Service Account which provides access to just that one artifact's location and these drawbacks: deletion is slower when performed by multiple Pods a Workflow with thousands of artifacts causes thousands of Pods to get executed, which could overwhelm kube-scheduler and kube-apiserver. 
if we delay the Artifact GC Pods by giving them a lower priority than the Workflow Pods, users will not get their artifacts deleted when they expect and may log bugs Summarizing ADR statement: \"In the context of Artifact Garbage Collection, facing whether to use a separate Pod for every artifact or not, we decided not to, to achieve faster garbage collection and reduced load on K8S, accepting that we will require a new CRD type.\"","title":"One Pod Per Artifact"},{"location":"proposals/artifact-gc-proposal/#service-accountiam-roles","text":"We considered some alternatives for how to specify Service Account and/or Annotations, which are applied to give the GC Pod access (slide 12). We will have them specify this information in a new ArtifactGC section of the spec that lives on the Workflow level but can be overridden on the Artifact level (option 3 from slide). Another option considered was to just allow specification on the Workflow level (option 2 from slide) so as to reduce the complexity of the code and reduce the potential number of Pods running, but Option 3 was selected in the end to maximize flexibility. Summarizing ADR statement: \"In the context of Artifact Garbage Collection, facing the question of how users should specify Service Account and annotations, we decided to give them the option to specify them on the Workflow level and/or override them on the Artifact level, to maximize flexibility for user needs, accepting that the code will be more complicated, and sometimes there will be many Pods running.\"","title":"Service Account/IAM roles"},{"location":"proposals/artifact-gc-proposal/#mvp-vs-post-mvp","text":"We will start with just S3. We can also make other determinations if it makes sense to postpone some parts for after MVP.","title":"MVP vs post-MVP"},{"location":"proposals/artifact-gc-proposal/#workflow-spec-validation","text":"We can reject the Workflow during validation if ArtifactGC is configured along with a non-supported storage engine (for now probably anything besides S3).","title":"Workflow Spec Validation"},{"location":"proposals/artifact-gc-proposal/#documentation","text":"Need to clarify certain things in our documentation: Users need to know that if they don't name their artifacts with unique keys, they risk the same key being deleted by one Workflow and created by another at the same time. One recommendation is to parametrize the key, e.g. {{workflow.uid}}/hello.txt . Requirement to specify Service Account or Annotation for ArtifactGC specifically if they are needed (we won't fall back to default Workflow SA/annotations). Also, the Service Account needs to either be bound to the \"agent\" role or otherwise allow the same access to ArtifactGCTasks .","title":"Documentation"},{"location":"proposals/cron-wf-improvement-proposal/","text":"Proposal for Cron Workflows improvements \u00b6 Introduction \u00b6 Currently, CronWorkflows are a great resource if we want to run recurring tasks to infinity. However, it is missing the ability to customize it, for example define how many times a workflow should run or how to handle multiple failures. I believe argo workflows would benefit of having more configuration options for cron workflows, to allow to change its behavior based on the result of its child\u2019s success or failures. Below I present my thoughts on how we could improve them, but also some questions and concerns on how to properly do it. 
Proposal \u00b6 This proposal discusses the viability of adding 2 more fields into the cron workflow configuration: RunStrategy : maxSuccess : maxFailures : maxSuccess - defines how many child workflows must have success before suspending the workflow schedule maxFailures - defines how many child workflows must fail before suspending the workflow scheduling. This may contain Failed workflows, Errored workflows or spec errors. For example, if we want to run a workflow just once, we could just set: RunStrategy : maxSuccess : 1 This configuration will make sure the controller will keep scheduling workflows until one of them finishes with success. As another example, if we want to stop scheduling workflows when they keep failing, we could configure the CronWorkflow with: RunStrategy : maxFailures : 2 This config will stop scheduling workflows if fails twice. Total vs consecutive \u00b6 One aspect that needs to be discussed is whether these configurations apply to the entire life of a cron Workflow or just in consecutive schedules. For example, if we configure a workflow to stop scheduling after 2 failures, I think it makes sense to have this applied when it fails twice consecutively. Otherwise, we can have 2 outages in different periods which will suspend the workflow. On the other hand, when configuring a workflow to run twice with success, it would make more sense to have it execute with success regardless of whether it is a consecutive success or not. If we have an outage after the first workflow succeeds, which translates into failed workflows, it should need to execute with success only once. So I think it would make sense to have: maxFailures - maximum number of consecutive failures before stopping the scheduling of a workflow maxSuccess - maximum number of workflows with success. How to store state \u00b6 Since we need to control how many child workflows had success/failure we must store state. With this some questions arise: Should we just store it through the lifetime of the controller or should we store it to a database? Probably only makes sense if we can backup the state somewhere (like a BD). However, I don't have enough knowledge about workflow's architecture to tell how good of an idea this is. If a CronWorkflow gets re-applied, does it maintain or reset the number of success/failures? I guess it should reset since a configuration change should be seen as a new start. How to stop the workflow \u00b6 Once the configured number of failures or successes is reached, it is necessary to stop the workflow scheduling. I believe we have 3 options: Delete the workflow: In my opinion, this is the worst option and goes against gitops principles. Suspend it (set suspend=true): the workflow spec is changed to have the workflow suspended. I may be wrong but this conflicts with gitops as well. Stop scheduling it: The workflow spec is the same. The controller needs to check if the max number of runs was already attained and skip scheduling if it did. Option 3 seems to be the only possibility. After reaching the max configured executions, the cron workflow would exist forever but never scheduled. Maybe we could add a new status field, like Inactive and have something the UI to show it? How to handle suspended workflows \u00b6 One possible case that comes to mind is a long outage where all workflows are failing. For example, imagine a workflow that needs to download a file from some storage and for some reason that storage is down. Workflows will keep getting scheduled but they are going to fail. 
If they fail the number of configured maxFailures , the workflows gets stopped forever. Once the storage is back up, how can the user enable the workflow again? Manually re-create the workflow: could be an issue if the user has multiple cron workflows Instead of stopping the workflow scheduling, introduce a back-off period as suggested by #7291 . Or maybe allow both configurations. I believe option 2 would allow the user to select if they want to stop scheduling or not. If they do, when cron workflows are wrongfully halted, they will need to manually start them again. If they don't, Argo will only introduce a back-off period between schedules to avoid rescheduling workflows that are just going to fail. Spec could look something like: RunStrategy : maxSuccess : maxFailures : value : # this would be optional back-off : enabled : true factor : 2 With this configuration the user could configure 3 behaviors: set value if they wanted to stop scheduling a workflow after a maximum number of consecutive failures. set value and back-off if they wanted to stop scheduling a workflow after a maximum number of consecutive failures but with a back-off period between each failure set back-off if they want a back-off period between each failure but they never want to stop the workflow scheduling. Wrap up \u00b6 I believe this feature would enhance the cron workflows to allow more specific use cases that are commonly requested by the community, such as running a workflow only once. This proposal raises some concerns on how to properly implement it and I would like to know the maintainers/contributors opinion on these 4 topics, but also some other issues that I couldn't think of. Resources \u00b6 This discussion was prompted by #10620 A first approach to this problem was discussed in 5659 A draft PR to implement the first approach #5662","title":"Proposal for Cron Workflows improvements"},{"location":"proposals/cron-wf-improvement-proposal/#proposal-for-cron-workflows-improvements","text":"","title":"Proposal for Cron Workflows improvements"},{"location":"proposals/cron-wf-improvement-proposal/#introduction","text":"Currently, CronWorkflows are a great resource if we want to run recurring tasks to infinity. However, it is missing the ability to customize it, for example define how many times a workflow should run or how to handle multiple failures. I believe argo workflows would benefit of having more configuration options for cron workflows, to allow to change its behavior based on the result of its child\u2019s success or failures. Below I present my thoughts on how we could improve them, but also some questions and concerns on how to properly do it.","title":"Introduction"},{"location":"proposals/cron-wf-improvement-proposal/#proposal","text":"This proposal discusses the viability of adding 2 more fields into the cron workflow configuration: RunStrategy : maxSuccess : maxFailures : maxSuccess - defines how many child workflows must have success before suspending the workflow schedule maxFailures - defines how many child workflows must fail before suspending the workflow scheduling. This may contain Failed workflows, Errored workflows or spec errors. For example, if we want to run a workflow just once, we could just set: RunStrategy : maxSuccess : 1 This configuration will make sure the controller will keep scheduling workflows until one of them finishes with success. 
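Put together as a full manifest, a run-once CronWorkflow using the proposed fields might look like the sketch below; note that runStrategy and its children are the hypothetical fields discussed in this proposal and are not part of the current CronWorkflow spec:

apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: run-once-example
spec:
  schedule: '* * * * *'
  runStrategy:              # proposed field, not yet in the spec
    maxSuccess: 1           # stop scheduling after one child workflow succeeds
  workflowSpec:
    entrypoint: main
    templates:
      - name: main
        container:
          image: argoproj/argosay:v2

Until a child workflow finishes with success, the controller would keep scheduling new ones every minute; after the first success it would stop scheduling while leaving the CronWorkflow resource in place.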
As another example, if we want to stop scheduling workflows when they keep failing, we could configure the CronWorkflow with: RunStrategy : maxFailures : 2 This config will stop scheduling workflows if fails twice.","title":"Proposal"},{"location":"proposals/cron-wf-improvement-proposal/#total-vs-consecutive","text":"One aspect that needs to be discussed is whether these configurations apply to the entire life of a cron Workflow or just in consecutive schedules. For example, if we configure a workflow to stop scheduling after 2 failures, I think it makes sense to have this applied when it fails twice consecutively. Otherwise, we can have 2 outages in different periods which will suspend the workflow. On the other hand, when configuring a workflow to run twice with success, it would make more sense to have it execute with success regardless of whether it is a consecutive success or not. If we have an outage after the first workflow succeeds, which translates into failed workflows, it should need to execute with success only once. So I think it would make sense to have: maxFailures - maximum number of consecutive failures before stopping the scheduling of a workflow maxSuccess - maximum number of workflows with success.","title":"Total vs consecutive"},{"location":"proposals/cron-wf-improvement-proposal/#how-to-store-state","text":"Since we need to control how many child workflows had success/failure we must store state. With this some questions arise: Should we just store it through the lifetime of the controller or should we store it to a database? Probably only makes sense if we can backup the state somewhere (like a BD). However, I don't have enough knowledge about workflow's architecture to tell how good of an idea this is. If a CronWorkflow gets re-applied, does it maintain or reset the number of success/failures? I guess it should reset since a configuration change should be seen as a new start.","title":"How to store state"},{"location":"proposals/cron-wf-improvement-proposal/#how-to-stop-the-workflow","text":"Once the configured number of failures or successes is reached, it is necessary to stop the workflow scheduling. I believe we have 3 options: Delete the workflow: In my opinion, this is the worst option and goes against gitops principles. Suspend it (set suspend=true): the workflow spec is changed to have the workflow suspended. I may be wrong but this conflicts with gitops as well. Stop scheduling it: The workflow spec is the same. The controller needs to check if the max number of runs was already attained and skip scheduling if it did. Option 3 seems to be the only possibility. After reaching the max configured executions, the cron workflow would exist forever but never scheduled. Maybe we could add a new status field, like Inactive and have something the UI to show it?","title":"How to stop the workflow"},{"location":"proposals/cron-wf-improvement-proposal/#how-to-handle-suspended-workflows","text":"One possible case that comes to mind is a long outage where all workflows are failing. For example, imagine a workflow that needs to download a file from some storage and for some reason that storage is down. Workflows will keep getting scheduled but they are going to fail. If they fail the number of configured maxFailures , the workflows gets stopped forever. Once the storage is back up, how can the user enable the workflow again? 
Manually re-create the workflow: could be an issue if the user has multiple cron workflows Instead of stopping the workflow scheduling, introduce a back-off period as suggested by #7291 . Or maybe allow both configurations. I believe option 2 would allow the user to select if they want to stop scheduling or not. If they do, when cron workflows are wrongfully halted, they will need to manually start them again. If they don't, Argo will only introduce a back-off period between schedules to avoid rescheduling workflows that are just going to fail. Spec could look something like: RunStrategy : maxSuccess : maxFailures : value : # this would be optional back-off : enabled : true factor : 2 With this configuration the user could configure 3 behaviors: set value if they wanted to stop scheduling a workflow after a maximum number of consecutive failures. set value and back-off if they wanted to stop scheduling a workflow after a maximum number of consecutive failures but with a back-off period between each failure set back-off if they want a back-off period between each failure but they never want to stop the workflow scheduling.","title":"How to handle suspended workflows"},{"location":"proposals/cron-wf-improvement-proposal/#wrap-up","text":"I believe this feature would enhance the cron workflows to allow more specific use cases that are commonly requested by the community, such as running a workflow only once. This proposal raises some concerns on how to properly implement it and I would like to know the maintainers/contributors opinion on these 4 topics, but also some other issues that I couldn't think of.","title":"Wrap up"},{"location":"proposals/cron-wf-improvement-proposal/#resources","text":"This discussion was prompted by #10620 A first approach to this problem was discussed in 5659 A draft PR to implement the first approach #5662","title":"Resources"},{"location":"proposals/makefile-improvement-proposal/","text":"Proposal for Makefile improvements \u00b6 Introduction \u00b6 The motivation for this proposal is to enable developers working on Argo Workflows to use build tools in a more reproducible way. Currently the Makefile is unfortunately too opinionated and as a result is often a blocker when first setting up Argo Workflows locally. I believe we should shrink the responsibilities of the Makefile and where possible outsource areas of responsibility to more specialized technology, such as Devenv/Nix in the case of dependency management. Proposal Specifics \u00b6 In order to better address reproducibility, it is better to split up the duties the Makefile currently performs into various sub components, that can be assembled in more appropriate technology. One important aspect here is to completely shift the responsibility of dependency management away from the Makefile and into technology such as Nix or Devenv. This proposal will also enable quicker access to a development build of Argo Workflows to developers, reducing the costs of on-boarding and barrier to entry. Devenv \u00b6 Benefits of Devenv \u00b6 Reproducible build environment Ability to run processes Disadvantages of Devenv \u00b6 Huge learning curve to tap into Nix functionality Less documentation Nix \u00b6 Benefits of Nix \u00b6 Reproducible build environment Direct raw control of various Nix related functionality instead of using Devenv More documentation Disadvantages of Nix \u00b6 Huge learning curve Recommendation \u00b6 I suggest that we use Nix over Devenv. 
I believe that our build environment is unique enough that we will be tapping into Nix anyway, it probably makes sense to directly use Nix in that case. Proposal \u00b6 In order to maximize the benefit we receive from using something like Nix, I suggest that we initially start off with a modest change to the Makefile. The first proposal would be to remove out all dependency management code and replace this functionality with Nix, where it is trivially possible. This may not be possible for some go lang related binaries we use, we will retain the Makefile functionality in those cases, at least for a while. Eventually we will migrate more and more of this responsibility away from the Makefile. Following Nix being responsible for all dependency management, we could start to consider moving more of our build system itself into Nix, perhaps it is easiest to start off with UI build as it is relatively painless. However, do note that this is not a requirement, I do not see a problem with the Makefile and the Nix file co-existing, it is more about finding a good balance between the reproducibility we desire and the effort we put into obtaining said reproducibility. An example for a replacement could be this dependency for example, note that we do not state any version here, replacing such installations with Nix based installations will ensure that we can ensure that if a build works on a certain developer's machine, it should also work on every other machine as well. What will Nix get us? \u00b6 As mentioned previously Nix gets us closer to reproducible build environments. It should ease significantly the on-boarding process of developers onto the project. There have been several developers who wanted to work on Argo Workflows but found the Makefile to be a barrier, it is likely that there are more developers on this boat. With a reproducible build environment, we hope that everyone who would like to contribute to the project is able to do so easily. It should also save time for engineers on-boarding onto the project, especially if they are using a system that is not Ubuntu or OSX. What will Nix cost us? \u00b6 If we proceed further with Nix, it will require some amount of people working on Argo Workflows to learn it, this is not a trivial task by any means. It will increase the barrier when it comes to changes that are build related, however, this isn't necessarily bad as build related changes should be far less frequent, the friction we will endure here is likely manageable. How will developers use nix? \u00b6 In the case that both Nix and the Makefile co-exist, we could use nix inside the Makefile itself. The Makefile calls into Nix to setup a developer environment with all dependencies, it will then continue the rest of the Makefile execution as normal. Following a complete or near complete migration to Nix, we can use nix-build for more of our tasks. An example of a C++ project environment is provided here Resources \u00b6 Nix Manual - Go Devenv How to Learn Nix","title":"Proposal for Makefile improvements"},{"location":"proposals/makefile-improvement-proposal/#proposal-for-makefile-improvements","text":"","title":"Proposal for Makefile improvements"},{"location":"proposals/makefile-improvement-proposal/#introduction","text":"The motivation for this proposal is to enable developers working on Argo Workflows to use build tools in a more reproducible way. Currently the Makefile is unfortunately too opinionated and as a result is often a blocker when first setting up Argo Workflows locally. 
I believe we should shrink the responsibilities of the Makefile and where possible outsource areas of responsibility to more specialized technology, such as Devenv/Nix in the case of dependency management.","title":"Introduction"},{"location":"proposals/makefile-improvement-proposal/#proposal-specifics","text":"In order to better address reproducibility, it is better to split up the duties the Makefile currently performs into various sub components, that can be assembled in more appropriate technology. One important aspect here is to completely shift the responsibility of dependency management away from the Makefile and into technology such as Nix or Devenv. This proposal will also enable quicker access to a development build of Argo Workflows to developers, reducing the costs of on-boarding and barrier to entry.","title":"Proposal Specifics"},{"location":"proposals/makefile-improvement-proposal/#devenv","text":"","title":"Devenv"},{"location":"proposals/makefile-improvement-proposal/#benefits-of-devenv","text":"Reproducible build environment Ability to run processes","title":"Benefits of Devenv"},{"location":"proposals/makefile-improvement-proposal/#disadvantages-of-devenv","text":"Huge learning curve to tap into Nix functionality Less documentation","title":"Disadvantages of Devenv"},{"location":"proposals/makefile-improvement-proposal/#nix","text":"","title":"Nix"},{"location":"proposals/makefile-improvement-proposal/#benefits-of-nix","text":"Reproducible build environment Direct raw control of various Nix related functionality instead of using Devenv More documentation","title":"Benefits of Nix"},{"location":"proposals/makefile-improvement-proposal/#disadvantages-of-nix","text":"Huge learning curve","title":"Disadvantages of Nix"},{"location":"proposals/makefile-improvement-proposal/#recommendation","text":"I suggest that we use Nix over Devenv. I believe that our build environment is unique enough that we will be tapping into Nix anyway, it probably makes sense to directly use Nix in that case.","title":"Recommendation"},{"location":"proposals/makefile-improvement-proposal/#proposal","text":"In order to maximize the benefit we receive from using something like Nix, I suggest that we initially start off with a modest change to the Makefile. The first proposal would be to remove out all dependency management code and replace this functionality with Nix, where it is trivially possible. This may not be possible for some go lang related binaries we use, we will retain the Makefile functionality in those cases, at least for a while. Eventually we will migrate more and more of this responsibility away from the Makefile. Following Nix being responsible for all dependency management, we could start to consider moving more of our build system itself into Nix, perhaps it is easiest to start off with UI build as it is relatively painless. However, do note that this is not a requirement, I do not see a problem with the Makefile and the Nix file co-existing, it is more about finding a good balance between the reproducibility we desire and the effort we put into obtaining said reproducibility. 
An example for a replacement could be this dependency for example, note that we do not state any version here, replacing such installations with Nix based installations will ensure that we can ensure that if a build works on a certain developer's machine, it should also work on every other machine as well.","title":"Proposal"},{"location":"proposals/makefile-improvement-proposal/#what-will-nix-get-us","text":"As mentioned previously Nix gets us closer to reproducible build environments. It should ease significantly the on-boarding process of developers onto the project. There have been several developers who wanted to work on Argo Workflows but found the Makefile to be a barrier, it is likely that there are more developers on this boat. With a reproducible build environment, we hope that everyone who would like to contribute to the project is able to do so easily. It should also save time for engineers on-boarding onto the project, especially if they are using a system that is not Ubuntu or OSX.","title":"What will Nix get us?"},{"location":"proposals/makefile-improvement-proposal/#what-will-nix-cost-us","text":"If we proceed further with Nix, it will require some amount of people working on Argo Workflows to learn it, this is not a trivial task by any means. It will increase the barrier when it comes to changes that are build related, however, this isn't necessarily bad as build related changes should be far less frequent, the friction we will endure here is likely manageable.","title":"What will Nix cost us?"},{"location":"proposals/makefile-improvement-proposal/#how-will-developers-use-nix","text":"In the case that both Nix and the Makefile co-exist, we could use nix inside the Makefile itself. The Makefile calls into Nix to setup a developer environment with all dependencies, it will then continue the rest of the Makefile execution as normal. Following a complete or near complete migration to Nix, we can use nix-build for more of our tasks. An example of a C++ project environment is provided here","title":"How will developers use nix?"},{"location":"proposals/makefile-improvement-proposal/#resources","text":"Nix Manual - Go Devenv How to Learn Nix","title":"Resources"},{"location":"use-cases/ci-cd/","text":"CI/CD \u00b6 Docs \u00b6 Quick start and training Learn about webhooks for triggering pipelines. Head to the Argo CD docs. Videos \u00b6 Distributed Load Testing Using Argo Workflows - Sumit Nagal (Intuit) CI/CD for Machine Learning at MLB using Argo Workflows - Eric Meadows How LitmusChaos uses Argo Workflows Tekton vs. Argo Workflows - Kubernetes-Native CI/CD Pipelines","title":"CI/CD"},{"location":"use-cases/ci-cd/#cicd","text":"","title":"CI/CD"},{"location":"use-cases/ci-cd/#docs","text":"Quick start and training Learn about webhooks for triggering pipelines. Head to the Argo CD docs.","title":"Docs"},{"location":"use-cases/ci-cd/#videos","text":"Distributed Load Testing Using Argo Workflows - Sumit Nagal (Intuit) CI/CD for Machine Learning at MLB using Argo Workflows - Eric Meadows How LitmusChaos uses Argo Workflows Tekton vs. 
Argo Workflows - Kubernetes-Native CI/CD Pipelines","title":"Videos"},{"location":"use-cases/data-processing/","text":"Data Processing \u00b6 Docs \u00b6 Quick start and training Videos \u00b6 Running a Data Replication Pipeline on Kubernetes with Argo and Singer.io Books \u00b6 Distributed Machine Learning Patterns (see Chapter 2 on data processing/ingestion patterns)","title":"Data Processing"},{"location":"use-cases/data-processing/#data-processing","text":"","title":"Data Processing"},{"location":"use-cases/data-processing/#docs","text":"Quick start and training","title":"Docs"},{"location":"use-cases/data-processing/#videos","text":"Running a Data Replication Pipeline on Kubernetes with Argo and Singer.io","title":"Videos"},{"location":"use-cases/data-processing/#books","text":"Distributed Machine Learning Patterns (see Chapter 2 on data processing/ingestion patterns)","title":"Books"},{"location":"use-cases/infrastructure-automation/","text":"Infrastructure Automation \u00b6 Docs \u00b6 Quick start and training Head to the Argo Events docs. Videos \u00b6 Infrastructure Automation with Argo at InsideBoard - Alexandre Le Mao (Head of infrastructure / Lead DevOps, InsideBoard) Argo and KNative - David Breitgand (IBM) - showing 5G infra automation use case How New Relic Uses Argo Workflows - Fischer Jemison, Jared Welch (New Relic) Building Kubernetes using Kubernetes - Tomas Valasek (SAP Concur)","title":"Infrastructure Automation"},{"location":"use-cases/infrastructure-automation/#infrastructure-automation","text":"","title":"Infrastructure Automation"},{"location":"use-cases/infrastructure-automation/#docs","text":"Quick start and training Head to the Argo Events docs.","title":"Docs"},{"location":"use-cases/infrastructure-automation/#videos","text":"Infrastructure Automation with Argo at InsideBoard - Alexandre Le Mao (Head of infrastructure / Lead DevOps, InsideBoard) Argo and KNative - David Breitgand (IBM) - showing 5G infra automation use case How New Relic Uses Argo Workflows - Fischer Jemison, Jared Welch (New Relic) Building Kubernetes using Kubernetes - Tomas Valasek (SAP Concur)","title":"Videos"},{"location":"use-cases/machine-learning/","text":"Machine Learning \u00b6 Docs \u00b6 Quick start and training Try out the updated Python and Java SDKs . Authoring and Submitting Argo Workflows using Python Head to the Kubeflow docs . Videos \u00b6 Automating Research Workflows at BlackRock Bridging into Python Ecosystem with Cloud-Native Distributed Machine Learning Pipelines Building Medical Grade AI with Argo Workflows CI/CD for Machine Learning at MLB using Argo Workflows - Eric Meadows Dynamic, Event-Driven Machine Learning Pipelines with Argo Workflows Machine Learning as Code: GitOps for ML with Kubeflow and Argo CD Machine Learning with Argo and Ploomber Making Complex R Forecast Applications Into Production Using Argo Workflows MLOps at TripAdvisor: ML Models CI/CD Automation with Argo - Ang Gao (Principal Software Engineer, TripAdvisor) Towards Cloud-Native Distributed Machine Learning Pipelines at Scale Books \u00b6 Distributed Machine Learning Patterns","title":"Machine Learning"},{"location":"use-cases/machine-learning/#machine-learning","text":"","title":"Machine Learning"},{"location":"use-cases/machine-learning/#docs","text":"Quick start and training Try out the updated Python and Java SDKs . 
Authoring and Submitting Argo Workflows using Python Head to the Kubeflow docs .","title":"Docs"},{"location":"use-cases/machine-learning/#videos","text":"Automating Research Workflows at BlackRock Bridging into Python Ecosystem with Cloud-Native Distributed Machine Learning Pipelines Building Medical Grade AI with Argo Workflows CI/CD for Machine Learning at MLB using Argo Workflows - Eric Meadows Dynamic, Event-Driven Machine Learning Pipelines with Argo Workflows Machine Learning as Code: GitOps for ML with Kubeflow and Argo CD Machine Learning with Argo and Ploomber Making Complex R Forecast Applications Into Production Using Argo Workflows MLOps at TripAdvisor: ML Models CI/CD Automation with Argo - Ang Gao (Principal Software Engineer, TripAdvisor) Towards Cloud-Native Distributed Machine Learning Pipelines at Scale","title":"Videos"},{"location":"use-cases/machine-learning/#books","text":"Distributed Machine Learning Patterns","title":"Books"},{"location":"use-cases/other/","text":"Other \u00b6 Argo can also be used for many other use-cases. Docs \u00b6 Quick start and training A Curated List of Awesome Projects and Resources Related to Argo","title":"Other"},{"location":"use-cases/other/#other","text":"Argo can also be used for many other use-cases.","title":"Other"},{"location":"use-cases/other/#docs","text":"Quick start and training A Curated List of Awesome Projects and Resources Related to Argo","title":"Docs"},{"location":"use-cases/stream-processing/","text":"Stream Processing \u00b6 Head to the ArgoLabs Dataflow docs.","title":"Stream Processing"},{"location":"use-cases/stream-processing/#stream-processing","text":"Head to the ArgoLabs Dataflow docs.","title":"Stream Processing"},{"location":"use-cases/webhdfs/","text":"webHDFS via HTTP artifacts \u00b6 webHDFS is a protocol allowing to access Hadoop or similar data storage via a unified REST API. Input Artifacts \u00b6 You can use HTTP artifacts to connect to webHDFS, where the URL will be the webHDFS endpoint including the file path and any query parameters. Suppose your webHDFS endpoint is available under https://mywebhdfsprovider.com/webhdfs/v1/ and you have a file my-art.txt located in a data folder, which you want to use as an input artifact. To construct the URL, you append the file path to the base webHDFS endpoint and set the OPEN operation via query parameter. The result is: https://mywebhdfsprovider.com/webhdfs/v1/data/my-art.txt?op=OPEN . See the below Workflow which will download the specified webHDFS artifact into the specified path : spec : # ... inputs : artifacts : - name : my-art path : /my-artifact http : url : \"https://mywebhdfsprovider.com/webhdfs/v1/file.txt?op=OPEN\" Additional fields can be set for HTTP artifacts (for example, headers). See usage in the full webHDFS example . Output Artifacts \u00b6 To declare a webHDFS output artifact, instead use the CREATE operation and set the file path to your desired location. In the below example, the artifact will be stored at outputs/newfile.txt . You can overwrite existing files with overwrite=true . spec : # ... outputs : artifacts : - name : my-art path : /my-artifact http : url : \"https://mywebhdfsprovider.com/webhdfs/v1/outputs/newfile.txt?op=CREATE&overwrite=true\" Authentication \u00b6 The above examples show minimal use cases without authentication. However, in a real-world scenario, you may want to use authentication. 
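For instance, HTTP Basic Auth credentials can be drawn from a Kubernetes Secret. The sketch below assumes a Secret named my-webhdfs-credentials with username and password keys and uses the auth.basicAuth fields of HTTP artifacts; check the full webHDFS example for the exact syntax supported by your Argo version:

spec:
  # ...
  inputs:
    artifacts:
      - name: my-art
        path: /my-artifact
        http:
          url: https://mywebhdfsprovider.com/webhdfs/v1/data/my-art.txt?op=OPEN
          auth:
            basicAuth:
              usernameSecret:
                name: my-webhdfs-credentials   # assumed Secret name
                key: username
              passwordSecret:
                name: my-webhdfs-credentials
                key: password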
The authentication mechanism is limited to those supported by HTTP artifacts: HTTP Basic Auth OAuth2 Client Certificates Examples for the latter two mechanisms can be found in the full webHDFS example . Provider dependent While your webHDFS provider may support the above mechanisms, Hadoop itself only supports authentication via Kerberos SPNEGO and Hadoop delegation token. HTTP artifacts do not currently support SPNEGO, but delegation tokens can be used via the delegation query parameter.","title":"webHDFS via HTTP artifacts"},{"location":"use-cases/webhdfs/#webhdfs-via-http-artifacts","text":"webHDFS is a protocol allowing to access Hadoop or similar data storage via a unified REST API.","title":"webHDFS via HTTP artifacts"},{"location":"use-cases/webhdfs/#input-artifacts","text":"You can use HTTP artifacts to connect to webHDFS, where the URL will be the webHDFS endpoint including the file path and any query parameters. Suppose your webHDFS endpoint is available under https://mywebhdfsprovider.com/webhdfs/v1/ and you have a file my-art.txt located in a data folder, which you want to use as an input artifact. To construct the URL, you append the file path to the base webHDFS endpoint and set the OPEN operation via query parameter. The result is: https://mywebhdfsprovider.com/webhdfs/v1/data/my-art.txt?op=OPEN . See the below Workflow which will download the specified webHDFS artifact into the specified path : spec : # ... inputs : artifacts : - name : my-art path : /my-artifact http : url : \"https://mywebhdfsprovider.com/webhdfs/v1/file.txt?op=OPEN\" Additional fields can be set for HTTP artifacts (for example, headers). See usage in the full webHDFS example .","title":"Input Artifacts"},{"location":"use-cases/webhdfs/#output-artifacts","text":"To declare a webHDFS output artifact, instead use the CREATE operation and set the file path to your desired location. In the below example, the artifact will be stored at outputs/newfile.txt . You can overwrite existing files with overwrite=true . spec : # ... outputs : artifacts : - name : my-art path : /my-artifact http : url : \"https://mywebhdfsprovider.com/webhdfs/v1/outputs/newfile.txt?op=CREATE&overwrite=true\"","title":"Output Artifacts"},{"location":"use-cases/webhdfs/#authentication","text":"The above examples show minimal use cases without authentication. However, in a real-world scenario, you may want to use authentication. The authentication mechanism is limited to those supported by HTTP artifacts: HTTP Basic Auth OAuth2 Client Certificates Examples for the latter two mechanisms can be found in the full webHDFS example . Provider dependent While your webHDFS provider may support the above mechanisms, Hadoop itself only supports authentication via Kerberos SPNEGO and Hadoop delegation token. HTTP artifacts do not currently support SPNEGO, but delegation tokens can be used via the delegation query parameter.","title":"Authentication"},{"location":"walk-through/","text":"About \u00b6 Argo is implemented as a Kubernetes CRD (Custom Resource Definition). As a result, Argo workflows can be managed using kubectl and natively integrates with other Kubernetes services such as volumes, secrets, and RBAC. The new Argo software is light-weight and installs in under a minute, and provides complete workflow features including parameter substitution, artifacts, fixtures, loops and recursive workflows. Dozens of examples are available in the examples directory on GitHub. 
For a complete description of the Argo workflow spec, please refer to the spec documentation . Progress through these examples in sequence to learn all the basics. Start with Argo CLI .","title":"About"},{"location":"walk-through/#about","text":"Argo is implemented as a Kubernetes CRD (Custom Resource Definition). As a result, Argo workflows can be managed using kubectl and natively integrates with other Kubernetes services such as volumes, secrets, and RBAC. The new Argo software is light-weight and installs in under a minute, and provides complete workflow features including parameter substitution, artifacts, fixtures, loops and recursive workflows. Dozens of examples are available in the examples directory on GitHub. For a complete description of the Argo workflow spec, please refer to the spec documentation . Progress through these examples in sequence to learn all the basics. Start with Argo CLI .","title":"About"},{"location":"walk-through/argo-cli/","text":"Argo CLI \u00b6 Installation \u00b6 To install the Argo CLI, follow the instructions on the GitHub Releases page . Usage \u00b6 In case you want to follow along with this walk-through, here's a quick overview of the most useful argo command line interface (CLI) commands. argo submit hello-world.yaml # submit a workflow spec to Kubernetes argo list # list current workflows argo get hello-world-xxx # get info about a specific workflow argo logs hello-world-xxx # print the logs from a workflow argo delete hello-world-xxx # delete workflow You can also run workflow specs directly using kubectl , but the Argo CLI provides syntax checking, nicer output, and requires less typing. See the CLI Reference for more details.","title":"Argo CLI"},{"location":"walk-through/argo-cli/#argo-cli","text":"","title":"Argo CLI"},{"location":"walk-through/argo-cli/#installation","text":"To install the Argo CLI, follow the instructions on the GitHub Releases page .","title":"Installation"},{"location":"walk-through/argo-cli/#usage","text":"In case you want to follow along with this walk-through, here's a quick overview of the most useful argo command line interface (CLI) commands. argo submit hello-world.yaml # submit a workflow spec to Kubernetes argo list # list current workflows argo get hello-world-xxx # get info about a specific workflow argo logs hello-world-xxx # print the logs from a workflow argo delete hello-world-xxx # delete workflow You can also run workflow specs directly using kubectl , but the Argo CLI provides syntax checking, nicer output, and requires less typing. See the CLI Reference for more details.","title":"Usage"},{"location":"walk-through/artifacts/","text":"Artifacts \u00b6 Note You will need to configure an artifact repository to run this example. When running workflows, it is very common to have steps that generate or consume artifacts. Often, the output artifacts of one step may be used as input artifacts to a subsequent step. The below workflow spec consists of two steps that run in sequence. The first step named generate-artifact will generate an artifact using the whalesay template that will be consumed by the second step named print-message that then consumes the generated artifact. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : artifact-passing- spec : entrypoint : artifact-example templates : - name : artifact-example steps : - - name : generate-artifact template : whalesay - - name : consume-artifact template : print-message arguments : artifacts : # bind message to the hello-art artifact # generated by the generate-artifact step - name : message from : \"{{steps.generate-artifact.outputs.artifacts.hello-art}}\" - name : whalesay container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"cowsay hello world | tee /tmp/hello_world.txt\" ] outputs : artifacts : # generate hello-art artifact from /tmp/hello_world.txt # artifacts can be directories as well as files - name : hello-art path : /tmp/hello_world.txt - name : print-message inputs : artifacts : # unpack the message input artifact # and put it at /tmp/message - name : message path : /tmp/message container : image : alpine:latest command : [ sh , -c ] args : [ \"cat /tmp/message\" ] The whalesay template uses the cowsay command to generate a file named /tmp/hello-world.txt . It then outputs this file as an artifact named hello-art . In general, the artifact's path may be a directory rather than just a file. The print-message template takes an input artifact named message , unpacks it at the path named /tmp/message and then prints the contents of /tmp/message using the cat command. The artifact-example template passes the hello-art artifact generated as an output of the generate-artifact step as the message input artifact to the print-message step. DAG templates use the tasks prefix to refer to another task, for example {{tasks.generate-artifact.outputs.artifacts.hello-art}} . Optionally, for large artifacts, you can set podSpecPatch in the workflow spec to increase the resource request for the init container and avoid any Out of memory issues. <... snipped ...> - name : large-artifact # below patch gets merged with the actual pod spec and increses the memory # request of the init container. podSpecPatch : | initContainers: - name: init resources: requests: memory: 2Gi cpu: 300m inputs : artifacts : - name : data path : /tmp/large-file container : image : alpine:latest command : [ sh , -c ] args : [ \"cat /tmp/large-file\" ] <... snipped ...> Artifacts are packaged as Tarballs and gzipped by default. You may customize this behavior by specifying an archive strategy, using the archive field. For example: <... snipped ...> outputs : artifacts : # default behavior - tar+gzip default compression. - name : hello-art-1 path : /tmp/hello_world.txt # disable archiving entirely - upload the file / directory as is. # this is useful when the container layout matches the desired target repository layout. - name : hello-art-2 path : /tmp/hello_world.txt archive : none : {} # customize the compression behavior (disabling it here). # this is useful for files with varying compression benefits, # e.g. disabling compression for a cached build workspace and large binaries, # or increasing compression for \"perfect\" textual data - like a json/xml export of a large database. - name : hello-art-3 path : /tmp/hello_world.txt archive : tar : # no compression (also accepts the standard gzip 1 to 9 values) compressionLevel : 0 <... snipped ...> Artifact Garbage Collection \u00b6 As of version 3.4 you can configure your Workflow to automatically delete Artifacts that you don't need (visit artifact repository capability for the current supported store engine). 
Artifacts can be deleted OnWorkflowCompletion or OnWorkflowDeletion . You can specify your Garbage Collection strategy on both the Workflow level and the Artifact level, so for example, you may have temporary artifacts that can be deleted right away but a final output that should be persisted: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : artifact-gc- spec : entrypoint : main artifactGC : strategy : OnWorkflowDeletion # default Strategy set here applies to all Artifacts by default templates : - name : main container : image : argoproj/argosay:v2 command : - sh - -c args : - | echo \"can throw this away\" > /tmp/temporary-artifact.txt echo \"keep this\" > /tmp/keep-this.txt outputs : artifacts : - name : temporary-artifact path : /tmp/temporary-artifact.txt s3 : key : temporary-artifact.txt - name : keep-this path : /tmp/keep-this.txt s3 : key : keep-this.txt artifactGC : strategy : Never # optional override for an Artifact Artifact Naming \u00b6 Consider parameterizing your S3 keys by {{workflow.uid}}, etc (as shown in the example above) if there's a possibility that you could have concurrent Workflows of the same spec. This would be to avoid a scenario in which the artifact from one Workflow is being deleted while the same S3 key is being generated for a different Workflow. Service Accounts and Annotations \u00b6 Does your S3 bucket require you to run with a special Service Account or IAM Role Annotation? You can either use the same ones you use for creating artifacts or generate new ones that are specific for deletion permission. Generally users will probably just have a single Service Account or IAM Role to apply to all artifacts for the Workflow, but you can also customize on the artifact level if you need that: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : artifact-gc- spec : entrypoint : main artifactGC : strategy : OnWorkflowDeletion ############################################################################################## # Workflow Level Service Account and Metadata ############################################################################################## serviceAccountName : my-sa podMetadata : annotations : eks.amazonaws.com/role-arn : arn:aws:iam::111122223333:role/my-iam-role templates : - name : main container : image : argoproj/argosay:v2 command : - sh - -c args : - | echo \"can throw this away\" > /tmp/temporary-artifact.txt echo \"keep this\" > /tmp/keep-this.txt outputs : artifacts : - name : temporary-artifact path : /tmp/temporary-artifact.txt s3 : key : temporary-artifact-{{workflow.uid}}.txt artifactGC : #################################################################################### # Optional override capability #################################################################################### serviceAccountName : artifact-specific-sa podMetadata : annotations : eks.amazonaws.com/role-arn : arn:aws:iam::111122223333:role/artifact-specific-iam-role - name : keep-this path : /tmp/keep-this.txt s3 : key : keep-this-{{workflow.uid}}.txt artifactGC : strategy : Never If you do supply your own Service Account you will need to create a RoleBinding that binds it with a role like this: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : annotations : workflows.argoproj.io/description : | This is the minimum recommended permissions needed if you want to use artifact GC. 
name : artifactgc rules : - apiGroups : - argoproj.io resources : - workflowartifactgctasks verbs : - list - watch - apiGroups : - argoproj.io resources : - workflowartifactgctasks/status verbs : - patch This is the artifactgc role if you installed using one of the quick-start manifest files. If you installed with the install.yaml file for the release then the same permissions are in the argo-cluster-role . If you don't use your own ServiceAccount and are just using default ServiceAccount, then the role needs a RoleBinding or ClusterRoleBinding to default ServiceAccount. What happens if Garbage Collection fails? \u00b6 If deletion of the artifact fails for some reason (other than the Artifact already having been deleted which is not considered a failure), the Workflow's Status will be marked with a new Condition to indicate \"Artifact GC Failure\", a Kubernetes Event will be issued, and the Argo Server UI will also indicate the failure. For additional debugging, the user should find 1 or more Pods named -artgc-* and can view the logs. If the user needs to delete the Workflow and its child CRD objects, they will need to patch the Workflow to remove the finalizer preventing the deletion: apiVersion : argoproj.io/v1alpha1 kind : Workflow finalizers : - workflows.argoproj.io/artifact-gc The finalizer can be deleted by doing: kubectl patch workflow my-wf \\ --type json \\ --patch = '[ { \"op\": \"remove\", \"path\": \"/metadata/finalizers\" } ]' Or for simplicity use the Argo CLI argo delete command with flag --force , which under the hood removes the finalizer before performing the deletion. Release Versions >= 3.5 \u00b6 A flag has been added to the Workflow Spec called forceFinalizerRemoval (see here ) to force the finalizer's removal even if Artifact GC fails: spec : artifactGC : strategy : OnWorkflowDeletion forceFinalizerRemoval : true","title":"Artifacts"},{"location":"walk-through/artifacts/#artifacts","text":"Note You will need to configure an artifact repository to run this example. When running workflows, it is very common to have steps that generate or consume artifacts. Often, the output artifacts of one step may be used as input artifacts to a subsequent step. The below workflow spec consists of two steps that run in sequence. The first step named generate-artifact will generate an artifact using the whalesay template that will be consumed by the second step named print-message that then consumes the generated artifact. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : artifact-passing- spec : entrypoint : artifact-example templates : - name : artifact-example steps : - - name : generate-artifact template : whalesay - - name : consume-artifact template : print-message arguments : artifacts : # bind message to the hello-art artifact # generated by the generate-artifact step - name : message from : \"{{steps.generate-artifact.outputs.artifacts.hello-art}}\" - name : whalesay container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"cowsay hello world | tee /tmp/hello_world.txt\" ] outputs : artifacts : # generate hello-art artifact from /tmp/hello_world.txt # artifacts can be directories as well as files - name : hello-art path : /tmp/hello_world.txt - name : print-message inputs : artifacts : # unpack the message input artifact # and put it at /tmp/message - name : message path : /tmp/message container : image : alpine:latest command : [ sh , -c ] args : [ \"cat /tmp/message\" ] The whalesay template uses the cowsay command to generate a file named /tmp/hello-world.txt . It then outputs this file as an artifact named hello-art . In general, the artifact's path may be a directory rather than just a file. The print-message template takes an input artifact named message , unpacks it at the path named /tmp/message and then prints the contents of /tmp/message using the cat command. The artifact-example template passes the hello-art artifact generated as an output of the generate-artifact step as the message input artifact to the print-message step. DAG templates use the tasks prefix to refer to another task, for example {{tasks.generate-artifact.outputs.artifacts.hello-art}} . Optionally, for large artifacts, you can set podSpecPatch in the workflow spec to increase the resource request for the init container and avoid any Out of memory issues. <... snipped ...> - name : large-artifact # below patch gets merged with the actual pod spec and increses the memory # request of the init container. podSpecPatch : | initContainers: - name: init resources: requests: memory: 2Gi cpu: 300m inputs : artifacts : - name : data path : /tmp/large-file container : image : alpine:latest command : [ sh , -c ] args : [ \"cat /tmp/large-file\" ] <... snipped ...> Artifacts are packaged as Tarballs and gzipped by default. You may customize this behavior by specifying an archive strategy, using the archive field. For example: <... snipped ...> outputs : artifacts : # default behavior - tar+gzip default compression. - name : hello-art-1 path : /tmp/hello_world.txt # disable archiving entirely - upload the file / directory as is. # this is useful when the container layout matches the desired target repository layout. - name : hello-art-2 path : /tmp/hello_world.txt archive : none : {} # customize the compression behavior (disabling it here). # this is useful for files with varying compression benefits, # e.g. disabling compression for a cached build workspace and large binaries, # or increasing compression for \"perfect\" textual data - like a json/xml export of a large database. - name : hello-art-3 path : /tmp/hello_world.txt archive : tar : # no compression (also accepts the standard gzip 1 to 9 values) compressionLevel : 0 <... 
snipped ...>","title":"Artifacts"},{"location":"walk-through/artifacts/#artifact-garbage-collection","text":"As of version 3.4 you can configure your Workflow to automatically delete Artifacts that you don't need (visit artifact repository capability for the current supported store engine). Artifacts can be deleted OnWorkflowCompletion or OnWorkflowDeletion . You can specify your Garbage Collection strategy on both the Workflow level and the Artifact level, so for example, you may have temporary artifacts that can be deleted right away but a final output that should be persisted: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : artifact-gc- spec : entrypoint : main artifactGC : strategy : OnWorkflowDeletion # default Strategy set here applies to all Artifacts by default templates : - name : main container : image : argoproj/argosay:v2 command : - sh - -c args : - | echo \"can throw this away\" > /tmp/temporary-artifact.txt echo \"keep this\" > /tmp/keep-this.txt outputs : artifacts : - name : temporary-artifact path : /tmp/temporary-artifact.txt s3 : key : temporary-artifact.txt - name : keep-this path : /tmp/keep-this.txt s3 : key : keep-this.txt artifactGC : strategy : Never # optional override for an Artifact","title":"Artifact Garbage Collection"},{"location":"walk-through/artifacts/#artifact-naming","text":"Consider parameterizing your S3 keys by {{workflow.uid}}, etc (as shown in the example above) if there's a possibility that you could have concurrent Workflows of the same spec. This would be to avoid a scenario in which the artifact from one Workflow is being deleted while the same S3 key is being generated for a different Workflow.","title":"Artifact Naming"},{"location":"walk-through/artifacts/#service-accounts-and-annotations","text":"Does your S3 bucket require you to run with a special Service Account or IAM Role Annotation? You can either use the same ones you use for creating artifacts or generate new ones that are specific for deletion permission. 
Generally users will probably just have a single Service Account or IAM Role to apply to all artifacts for the Workflow, but you can also customize on the artifact level if you need that: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : artifact-gc- spec : entrypoint : main artifactGC : strategy : OnWorkflowDeletion ############################################################################################## # Workflow Level Service Account and Metadata ############################################################################################## serviceAccountName : my-sa podMetadata : annotations : eks.amazonaws.com/role-arn : arn:aws:iam::111122223333:role/my-iam-role templates : - name : main container : image : argoproj/argosay:v2 command : - sh - -c args : - | echo \"can throw this away\" > /tmp/temporary-artifact.txt echo \"keep this\" > /tmp/keep-this.txt outputs : artifacts : - name : temporary-artifact path : /tmp/temporary-artifact.txt s3 : key : temporary-artifact-{{workflow.uid}}.txt artifactGC : #################################################################################### # Optional override capability #################################################################################### serviceAccountName : artifact-specific-sa podMetadata : annotations : eks.amazonaws.com/role-arn : arn:aws:iam::111122223333:role/artifact-specific-iam-role - name : keep-this path : /tmp/keep-this.txt s3 : key : keep-this-{{workflow.uid}}.txt artifactGC : strategy : Never If you do supply your own Service Account you will need to create a RoleBinding that binds it with a role like this: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : annotations : workflows.argoproj.io/description : | This is the minimum recommended permissions needed if you want to use artifact GC. name : artifactgc rules : - apiGroups : - argoproj.io resources : - workflowartifactgctasks verbs : - list - watch - apiGroups : - argoproj.io resources : - workflowartifactgctasks/status verbs : - patch This is the artifactgc role if you installed using one of the quick-start manifest files. If you installed with the install.yaml file for the release then the same permissions are in the argo-cluster-role . If you don't use your own ServiceAccount and are just using default ServiceAccount, then the role needs a RoleBinding or ClusterRoleBinding to default ServiceAccount.","title":"Service Accounts and Annotations"},{"location":"walk-through/artifacts/#what-happens-if-garbage-collection-fails","text":"If deletion of the artifact fails for some reason (other than the Artifact already having been deleted which is not considered a failure), the Workflow's Status will be marked with a new Condition to indicate \"Artifact GC Failure\", a Kubernetes Event will be issued, and the Argo Server UI will also indicate the failure. For additional debugging, the user should find 1 or more Pods named -artgc-* and can view the logs. 
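For example, a minimal sketch using plain kubectl (the Pod name and namespace below are placeholders):
kubectl get pods -n <namespace> | grep -- '-artgc-'   # list the Artifact GC Pods created for the Workflow
kubectl logs <artgc-pod-name> -n <namespace>          # inspect the logs to see why deletion failed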
If the user needs to delete the Workflow and its child CRD objects, they will need to patch the Workflow to remove the finalizer preventing the deletion: apiVersion : argoproj.io/v1alpha1 kind : Workflow finalizers : - workflows.argoproj.io/artifact-gc The finalizer can be deleted by doing: kubectl patch workflow my-wf \\ --type json \\ --patch = '[ { \"op\": \"remove\", \"path\": \"/metadata/finalizers\" } ]' Or for simplicity use the Argo CLI argo delete command with flag --force , which under the hood removes the finalizer before performing the deletion.","title":"What happens if Garbage Collection fails?"},{"location":"walk-through/artifacts/#release-versions-35","text":"A flag has been added to the Workflow Spec called forceFinalizerRemoval (see here ) to force the finalizer's removal even if Artifact GC fails: spec : artifactGC : strategy : OnWorkflowDeletion forceFinalizerRemoval : true","title":"Release Versions >= 3.5"},{"location":"walk-through/conditionals/","text":"Conditionals \u00b6 We also support conditional execution. The syntax is implemented by govaluate which offers the support for complex syntax. See in the example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : coinflip- spec : entrypoint : coinflip templates : - name : coinflip steps : # flip a coin - - name : flip-coin template : flip-coin # evaluate the result in parallel - - name : heads template : heads # call heads template if \"heads\" when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails template : tails # call tails template if \"tails\" when : \"{{steps.flip-coin.outputs.result}} == tails\" - - name : flip-again template : flip-coin - - name : complex-condition template : heads-tails-or-twice-tails # call heads template if first flip was \"heads\" and second was \"tails\" OR both were \"tails\" when : >- ( {{steps.flip-coin.outputs.result}} == heads && {{steps.flip-again.outputs.result}} == tails ) || ( {{steps.flip-coin.outputs.result}} == tails && {{steps.flip-again.outputs.result}} == tails ) - name : heads-regex template : heads # call heads template if ~ \"hea\" when : \"{{steps.flip-again.outputs.result}} =~ hea\" - name : tails-regex template : tails # call heads template if ~ \"tai\" when : \"{{steps.flip-again.outputs.result}} =~ tai\" # Return heads or tails based on a random number - name : flip-coin script : image : python:alpine3.6 command : [ python ] source : | import random result = \"heads\" if random.randint(0,1) == 0 else \"tails\" print(result) - name : heads container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads\\\"\" ] - name : tails container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was tails\\\"\" ] - name : heads-tails-or-twice-tails container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads the first flip and tails the second. Or it was two times tails.\\\"\" ] Nested Quotes If the parameter value contains quotes, it may invalidate the govaluate expression. To handle parameters with quotes, embed an expr expression in the conditional. For example: when : \"{{=inputs.parameters['may-contain-quotes'] == 'example'}}\"","title":"Conditionals"},{"location":"walk-through/conditionals/#conditionals","text":"We also support conditional execution. The syntax is implemented by govaluate which offers the support for complex syntax. 
See in the example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : coinflip- spec : entrypoint : coinflip templates : - name : coinflip steps : # flip a coin - - name : flip-coin template : flip-coin # evaluate the result in parallel - - name : heads template : heads # call heads template if \"heads\" when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails template : tails # call tails template if \"tails\" when : \"{{steps.flip-coin.outputs.result}} == tails\" - - name : flip-again template : flip-coin - - name : complex-condition template : heads-tails-or-twice-tails # call heads template if first flip was \"heads\" and second was \"tails\" OR both were \"tails\" when : >- ( {{steps.flip-coin.outputs.result}} == heads && {{steps.flip-again.outputs.result}} == tails ) || ( {{steps.flip-coin.outputs.result}} == tails && {{steps.flip-again.outputs.result}} == tails ) - name : heads-regex template : heads # call heads template if ~ \"hea\" when : \"{{steps.flip-again.outputs.result}} =~ hea\" - name : tails-regex template : tails # call heads template if ~ \"tai\" when : \"{{steps.flip-again.outputs.result}} =~ tai\" # Return heads or tails based on a random number - name : flip-coin script : image : python:alpine3.6 command : [ python ] source : | import random result = \"heads\" if random.randint(0,1) == 0 else \"tails\" print(result) - name : heads container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads\\\"\" ] - name : tails container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was tails\\\"\" ] - name : heads-tails-or-twice-tails container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads the first flip and tails the second. Or it was two times tails.\\\"\" ] Nested Quotes If the parameter value contains quotes, it may invalidate the govaluate expression. To handle parameters with quotes, embed an expr expression in the conditional. For example: when : \"{{=inputs.parameters['may-contain-quotes'] == 'example'}}\"","title":"Conditionals"},{"location":"walk-through/continuous-integration-examples/","text":"Continuous Integration Examples \u00b6 Continuous integration is a popular application for workflows. Some quick examples of CI workflows: https://github.com/argoproj/argo-workflows/tree/main/examples/ci.yaml https://github.com/argoproj/argo-workflows/tree/main/examples/influxdb-ci.yaml And a CI WorkflowTemplate example: https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml A more detailed example is https://github.com/sendible-labs/argo-workflows-ci-example , which allows you to create a local CI workflow for the purposes of learning.","title":"Continuous Integration Examples"},{"location":"walk-through/continuous-integration-examples/#continuous-integration-examples","text":"Continuous integration is a popular application for workflows. 
Some quick examples of CI workflows: https://github.com/argoproj/argo-workflows/tree/main/examples/ci.yaml https://github.com/argoproj/argo-workflows/tree/main/examples/influxdb-ci.yaml And a CI WorkflowTemplate example: https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml A more detailed example is https://github.com/sendible-labs/argo-workflows-ci-example , which allows you to create a local CI workflow for the purposes of learning.","title":"Continuous Integration Examples"},{"location":"walk-through/custom-template-variable-reference/","text":"Custom Template Variable Reference \u00b6 In this example, we can see how we can use the other template language variable reference (E.g: Jinja) in Argo workflow template. Argo will validate and resolve only the variable that starts with an Argo allowed prefix { \"item\", \"steps\", \"inputs\", \"outputs\", \"workflow\", \"tasks\" } apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : custom-template-variable- spec : entrypoint : hello-hello-hello templates : - name : hello-hello-hello steps : - - name : hello1 template : whalesay arguments : parameters : [{ name : message , value : \"hello1\" }] - - name : hello2a template : whalesay arguments : parameters : [{ name : message , value : \"hello2a\" }] - name : hello2b template : whalesay arguments : parameters : [{ name : message , value : \"hello2b\" }] - name : whalesay inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{user.username}}\" ]","title":"Custom Template Variable Reference"},{"location":"walk-through/custom-template-variable-reference/#custom-template-variable-reference","text":"In this example, we can see how we can use the other template language variable reference (E.g: Jinja) in Argo workflow template. Argo will validate and resolve only the variable that starts with an Argo allowed prefix { \"item\", \"steps\", \"inputs\", \"outputs\", \"workflow\", \"tasks\" } apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : custom-template-variable- spec : entrypoint : hello-hello-hello templates : - name : hello-hello-hello steps : - - name : hello1 template : whalesay arguments : parameters : [{ name : message , value : \"hello1\" }] - - name : hello2a template : whalesay arguments : parameters : [{ name : message , value : \"hello2a\" }] - name : hello2b template : whalesay arguments : parameters : [{ name : message , value : \"hello2b\" }] - name : whalesay inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{user.username}}\" ]","title":"Custom Template Variable Reference"},{"location":"walk-through/daemon-containers/","text":"Daemon Containers \u00b6 Argo workflows can start containers that run in the background (also known as daemon containers ) while the workflow itself continues execution. Note that the daemons will be automatically destroyed when the workflow exits the template scope in which the daemon was invoked. Daemon containers are useful for starting up services to be tested or to be used in testing (e.g., fixtures). We also find it very useful when running large simulations to spin up a database as a daemon for collecting and organizing the results. The big advantage of daemons compared with sidecars is that their existence can persist across multiple steps or even the entire workflow. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : daemon-step- spec : entrypoint : daemon-example templates : - name : daemon-example steps : - - name : influx template : influxdb # start an influxdb as a daemon (see the influxdb template spec below) - - name : init-database # initialize influxdb template : influxdb-client arguments : parameters : - name : cmd value : curl -XPOST 'http://{{steps.influx.ip}}:8086/query' --data-urlencode \"q=CREATE DATABASE mydb\" - - name : producer-1 # add entries to influxdb template : influxdb-client arguments : parameters : - name : cmd value : for i in $(seq 1 20); do curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d \"cpu,host=server01,region=uswest load=$i\" ; sleep .5 ; done - name : producer-2 # add entries to influxdb template : influxdb-client arguments : parameters : - name : cmd value : for i in $(seq 1 20); do curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d \"cpu,host=server02,region=uswest load=$((RANDOM % 100))\" ; sleep .5 ; done - name : producer-3 # add entries to influxdb template : influxdb-client arguments : parameters : - name : cmd value : curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d 'cpu,host=server03,region=useast load=15.4' - - name : consumer # consume intries from influxdb template : influxdb-client arguments : parameters : - name : cmd value : curl --silent -G http://{{steps.influx.ip}}:8086/query?pretty=true --data-urlencode \"db=mydb\" --data-urlencode \"q=SELECT * FROM cpu\" - name : influxdb daemon : true # start influxdb as a daemon retryStrategy : limit : 10 # retry container if it fails container : image : influxdb:1.2 command : - influxd readinessProbe : # wait for readinessProbe to succeed httpGet : path : /ping port : 8086 - name : influxdb-client inputs : parameters : - name : cmd container : image : appropriate/curl:latest command : [ \"/bin/sh\" , \"-c\" ] args : [ \"{{inputs.parameters.cmd}}\" ] resources : requests : memory : 32Mi cpu : 100m Step templates use the steps prefix to refer to another step: for example {{steps.influx.ip}} . In DAG templates, the tasks prefix is used instead: for example {{tasks.influx.ip}} .","title":"Daemon Containers"},{"location":"walk-through/daemon-containers/#daemon-containers","text":"Argo workflows can start containers that run in the background (also known as daemon containers ) while the workflow itself continues execution. Note that the daemons will be automatically destroyed when the workflow exits the template scope in which the daemon was invoked. Daemon containers are useful for starting up services to be tested or to be used in testing (e.g., fixtures). We also find it very useful when running large simulations to spin up a database as a daemon for collecting and organizing the results. The big advantage of daemons compared with sidecars is that their existence can persist across multiple steps or even the entire workflow. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : daemon-step- spec : entrypoint : daemon-example templates : - name : daemon-example steps : - - name : influx template : influxdb # start an influxdb as a daemon (see the influxdb template spec below) - - name : init-database # initialize influxdb template : influxdb-client arguments : parameters : - name : cmd value : curl -XPOST 'http://{{steps.influx.ip}}:8086/query' --data-urlencode \"q=CREATE DATABASE mydb\" - - name : producer-1 # add entries to influxdb template : influxdb-client arguments : parameters : - name : cmd value : for i in $(seq 1 20); do curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d \"cpu,host=server01,region=uswest load=$i\" ; sleep .5 ; done - name : producer-2 # add entries to influxdb template : influxdb-client arguments : parameters : - name : cmd value : for i in $(seq 1 20); do curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d \"cpu,host=server02,region=uswest load=$((RANDOM % 100))\" ; sleep .5 ; done - name : producer-3 # add entries to influxdb template : influxdb-client arguments : parameters : - name : cmd value : curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d 'cpu,host=server03,region=useast load=15.4' - - name : consumer # consume intries from influxdb template : influxdb-client arguments : parameters : - name : cmd value : curl --silent -G http://{{steps.influx.ip}}:8086/query?pretty=true --data-urlencode \"db=mydb\" --data-urlencode \"q=SELECT * FROM cpu\" - name : influxdb daemon : true # start influxdb as a daemon retryStrategy : limit : 10 # retry container if it fails container : image : influxdb:1.2 command : - influxd readinessProbe : # wait for readinessProbe to succeed httpGet : path : /ping port : 8086 - name : influxdb-client inputs : parameters : - name : cmd container : image : appropriate/curl:latest command : [ \"/bin/sh\" , \"-c\" ] args : [ \"{{inputs.parameters.cmd}}\" ] resources : requests : memory : 32Mi cpu : 100m Step templates use the steps prefix to refer to another step: for example {{steps.influx.ip}} . In DAG templates, the tasks prefix is used instead: for example {{tasks.influx.ip}} .","title":"Daemon Containers"},{"location":"walk-through/dag/","text":"DAG \u00b6 As an alternative to specifying sequences of steps , you can define a workflow as a directed-acyclic graph (DAG) by specifying the dependencies of each task. DAGs can be simpler to maintain for complex workflows and allow for maximum parallelism when running tasks. In the following workflow, step A runs first, as it has no dependencies. Once A has finished, steps B and C run in parallel. Finally, once B and C have completed, step D runs. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : dag-diamond- spec : entrypoint : diamond templates : - name : echo inputs : parameters : - name : message container : image : alpine:3.7 command : [ echo , \"{{inputs.parameters.message}}\" ] - name : diamond dag : tasks : - name : A template : echo arguments : parameters : [{ name : message , value : A }] - name : B dependencies : [ A ] template : echo arguments : parameters : [{ name : message , value : B }] - name : C dependencies : [ A ] template : echo arguments : parameters : [{ name : message , value : C }] - name : D dependencies : [ B , C ] template : echo arguments : parameters : [{ name : message , value : D }] The dependency graph may have multiple roots . 
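For illustration, here is a minimal sketch (not taken from the docs above) of a DAG template with two roots, reusing the echo template defined in the diamond example: A and B have no dependencies and start in parallel, and C waits for both.
  - name: multi-root
    dag:
      tasks:
      - name: A                    # first root: no dependencies
        template: echo
        arguments:
          parameters: [{name: message, value: A}]
      - name: B                    # second root: also no dependencies, starts in parallel with A
        template: echo
        arguments:
          parameters: [{name: message, value: B}]
      - name: C                    # joins both roots
        dependencies: [A, B]
        template: echo
        arguments:
          parameters: [{name: message, value: C}]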
The templates called from a DAG or steps template can themselves be DAG or steps templates, allowing complex workflows to be split into manageable pieces. Enhanced Depends \u00b6 For more complicated, conditional dependencies, you can use the Enhanced Depends feature. Fail Fast \u00b6 By default, DAGs fail fast: when one task fails, no new tasks will be scheduled. Once all running tasks are completed, the DAG will be marked as failed. If failFast is set to false for a DAG, all branches will run to completion, regardless of failures in other branches.","title":"DAG"},{"location":"walk-through/dag/#dag","text":"As an alternative to specifying sequences of steps , you can define a workflow as a directed-acyclic graph (DAG) by specifying the dependencies of each task. DAGs can be simpler to maintain for complex workflows and allow for maximum parallelism when running tasks. In the following workflow, step A runs first, as it has no dependencies. Once A has finished, steps B and C run in parallel. Finally, once B and C have completed, step D runs. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : dag-diamond- spec : entrypoint : diamond templates : - name : echo inputs : parameters : - name : message container : image : alpine:3.7 command : [ echo , \"{{inputs.parameters.message}}\" ] - name : diamond dag : tasks : - name : A template : echo arguments : parameters : [{ name : message , value : A }] - name : B dependencies : [ A ] template : echo arguments : parameters : [{ name : message , value : B }] - name : C dependencies : [ A ] template : echo arguments : parameters : [{ name : message , value : C }] - name : D dependencies : [ B , C ] template : echo arguments : parameters : [{ name : message , value : D }] The dependency graph may have multiple roots . The templates called from a DAG or steps template can themselves be DAG or steps templates, allowing complex workflows to be split into manageable pieces.","title":"DAG"},{"location":"walk-through/dag/#enhanced-depends","text":"For more complicated, conditional dependencies, you can use the Enhanced Depends feature.","title":"Enhanced Depends"},{"location":"walk-through/dag/#fail-fast","text":"By default, DAGs fail fast: when one task fails, no new tasks will be scheduled. Once all running tasks are completed, the DAG will be marked as failed. If failFast is set to false for a DAG, all branches will run to completion, regardless of failures in other branches.","title":"Fail Fast"},{"location":"walk-through/docker-in-docker-using-sidecars/","text":"Docker-in-Docker Using Sidecars \u00b6 Note: It is increasingly unlikely that the below example will work for you on your version of Kubernetes. Since Kubernetes 1.24, the dockershim has been unavailable as part of Kubernetes , rendering Docker-in-Docker unworkable. It is recommended to seek alternative methods of building containers, such as Kaniko or Buildkit . A Buildkit Workflow example is available in the examples directory of the Argo Workflows repository. An application of sidecars is to implement Docker-in-Docker (DIND). DIND is useful when you want to run Docker commands from inside a container. For example, you may want to build and push a container image from inside your build container. In the following example, we use the docker:dind image to run a Docker daemon in a sidecar and give the main container access to the daemon. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : sidecar-dind- spec : entrypoint : dind-sidecar-example templates : - name : dind-sidecar-example container : image : docker:19.03.13 command : [ sh , -c ] args : [ \"until docker ps; do sleep 3; done; docker run --rm debian:latest cat /etc/os-release\" ] env : - name : DOCKER_HOST # the docker daemon can be access on the standard port on localhost value : 127.0.0.1 sidecars : - name : dind image : docker:19.03.13-dind # Docker already provides an image for running a Docker daemon command : [ dockerd-entrypoint.sh ] env : - name : DOCKER_TLS_CERTDIR # Docker TLS env config value : \"\" securityContext : privileged : true # the Docker daemon can only run in a privileged container # mirrorVolumeMounts will mount the same volumes specified in the main container # to the sidecar (including artifacts), at the same mountPaths. This enables # dind daemon to (partially) see the same filesystem as the main container in # order to use features such as docker volume binding. mirrorVolumeMounts : true","title":"Docker-in-Docker Using Sidecars"},{"location":"walk-through/docker-in-docker-using-sidecars/#docker-in-docker-using-sidecars","text":"Note: It is increasingly unlikely that the below example will work for you on your version of Kubernetes. Since Kubernetes 1.24, the dockershim has been unavailable as part of Kubernetes , rendering Docker-in-Docker unworkable. It is recommended to seek alternative methods of building containers, such as Kaniko or Buildkit . A Buildkit Workflow example is available in the examples directory of the Argo Workflows repository. An application of sidecars is to implement Docker-in-Docker (DIND). DIND is useful when you want to run Docker commands from inside a container. For example, you may want to build and push a container image from inside your build container. In the following example, we use the docker:dind image to run a Docker daemon in a sidecar and give the main container access to the daemon. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : sidecar-dind- spec : entrypoint : dind-sidecar-example templates : - name : dind-sidecar-example container : image : docker:19.03.13 command : [ sh , -c ] args : [ \"until docker ps; do sleep 3; done; docker run --rm debian:latest cat /etc/os-release\" ] env : - name : DOCKER_HOST # the docker daemon can be access on the standard port on localhost value : 127.0.0.1 sidecars : - name : dind image : docker:19.03.13-dind # Docker already provides an image for running a Docker daemon command : [ dockerd-entrypoint.sh ] env : - name : DOCKER_TLS_CERTDIR # Docker TLS env config value : \"\" securityContext : privileged : true # the Docker daemon can only run in a privileged container # mirrorVolumeMounts will mount the same volumes specified in the main container # to the sidecar (including artifacts), at the same mountPaths. This enables # dind daemon to (partially) see the same filesystem as the main container in # order to use features such as docker volume binding. mirrorVolumeMounts : true","title":"Docker-in-Docker Using Sidecars"},{"location":"walk-through/exit-handlers/","text":"Exit handlers \u00b6 An exit handler is a template that always executes, irrespective of success or failure, at the end of the workflow. Some common use cases of exit handlers are: cleaning up after a workflow runs sending notifications of workflow status (e.g., e-mail/Slack) posting the pass/fail status to a web-hook result (e.g. 
GitHub build result) resubmitting or submitting another workflow apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : exit-handlers- spec : entrypoint : intentional-fail onExit : exit-handler # invoke exit-handler template at end of the workflow templates : # primary workflow template - name : intentional-fail container : image : alpine:latest command : [ sh , -c ] args : [ \"echo intentional failure; exit 1\" ] # Exit handler templates # After the completion of the entrypoint template, the status of the # workflow is made available in the global variable {{workflow.status}}. # {{workflow.status}} will be one of: Succeeded, Failed, Error - name : exit-handler steps : - - name : notify template : send-email - name : celebrate template : celebrate when : \"{{workflow.status}} == Succeeded\" - name : cry template : cry when : \"{{workflow.status}} != Succeeded\" - name : send-email container : image : alpine:latest command : [ sh , -c ] args : [ \"echo send e-mail: {{workflow.name}} {{workflow.status}} {{workflow.duration}}\" ] - name : celebrate container : image : alpine:latest command : [ sh , -c ] args : [ \"echo hooray!\" ] - name : cry container : image : alpine:latest command : [ sh , -c ] args : [ \"echo boohoo!\" ]","title":"Exit handlers"},{"location":"walk-through/exit-handlers/#exit-handlers","text":"An exit handler is a template that always executes, irrespective of success or failure, at the end of the workflow. Some common use cases of exit handlers are: cleaning up after a workflow runs sending notifications of workflow status (e.g., e-mail/Slack) posting the pass/fail status to a web-hook result (e.g. GitHub build result) resubmitting or submitting another workflow apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : exit-handlers- spec : entrypoint : intentional-fail onExit : exit-handler # invoke exit-handler template at end of the workflow templates : # primary workflow template - name : intentional-fail container : image : alpine:latest command : [ sh , -c ] args : [ \"echo intentional failure; exit 1\" ] # Exit handler templates # After the completion of the entrypoint template, the status of the # workflow is made available in the global variable {{workflow.status}}. # {{workflow.status}} will be one of: Succeeded, Failed, Error - name : exit-handler steps : - - name : notify template : send-email - name : celebrate template : celebrate when : \"{{workflow.status}} == Succeeded\" - name : cry template : cry when : \"{{workflow.status}} != Succeeded\" - name : send-email container : image : alpine:latest command : [ sh , -c ] args : [ \"echo send e-mail: {{workflow.name}} {{workflow.status}} {{workflow.duration}}\" ] - name : celebrate container : image : alpine:latest command : [ sh , -c ] args : [ \"echo hooray!\" ] - name : cry container : image : alpine:latest command : [ sh , -c ] args : [ \"echo boohoo!\" ]","title":"Exit handlers"},{"location":"walk-through/hardwired-artifacts/","text":"Hardwired Artifacts \u00b6 You can use any container image to generate any kind of artifact. In practice, however, certain types of artifacts are very common, so there is built-in support for git, HTTP, GCS, and S3 artifacts. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hardwired-artifact- spec : entrypoint : hardwired-artifact templates : - name : hardwired-artifact inputs : artifacts : # Check out the main branch of the argo repo and place it at /src # revision can be anything that git checkout accepts: branch, commit, tag, etc. - name : argo-source path : /src git : repo : https://github.com/argoproj/argo-workflows.git revision : \"main\" # Download kubectl 1.8.0 and place it at /bin/kubectl - name : kubectl path : /bin/kubectl mode : 0755 http : url : https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl # Copy an s3 compatible artifact repository bucket (such as AWS, GCS and MinIO) and place it at /s3 - name : objects path : /s3 s3 : endpoint : storage.googleapis.com bucket : my-bucket-name key : path/in/bucket accessKeySecret : name : my-s3-credentials key : accessKey secretKeySecret : name : my-s3-credentials key : secretKey container : image : debian command : [ sh , -c ] args : [ \"ls -l /src /bin/kubectl /s3\" ]","title":"Hardwired Artifacts"},{"location":"walk-through/hardwired-artifacts/#hardwired-artifacts","text":"You can use any container image to generate any kind of artifact. In practice, however, certain types of artifacts are very common, so there is built-in support for git, HTTP, GCS, and S3 artifacts. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hardwired-artifact- spec : entrypoint : hardwired-artifact templates : - name : hardwired-artifact inputs : artifacts : # Check out the main branch of the argo repo and place it at /src # revision can be anything that git checkout accepts: branch, commit, tag, etc. - name : argo-source path : /src git : repo : https://github.com/argoproj/argo-workflows.git revision : \"main\" # Download kubectl 1.8.0 and place it at /bin/kubectl - name : kubectl path : /bin/kubectl mode : 0755 http : url : https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl # Copy an s3 compatible artifact repository bucket (such as AWS, GCS and MinIO) and place it at /s3 - name : objects path : /s3 s3 : endpoint : storage.googleapis.com bucket : my-bucket-name key : path/in/bucket accessKeySecret : name : my-s3-credentials key : accessKey secretKeySecret : name : my-s3-credentials key : secretKey container : image : debian command : [ sh , -c ] args : [ \"ls -l /src /bin/kubectl /s3\" ]","title":"Hardwired Artifacts"},{"location":"walk-through/hello-world/","text":"Hello World \u00b6 Let's start by creating a very simple workflow template to echo \"hello world\" using the docker/whalesay container image from Docker Hub. You can run this directly from your shell with a simple docker command: $ docker run docker/whalesay cowsay \"hello world\" _____________ < hello world > ------------- \\ \\ \\ ## . ## ## ## == ## ## ## ## === / \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\" ___/ === ~~~ { ~~ ~~~~ ~~~ ~~~~ ~~ ~ / === - ~~~ \\_ _____ o __/ \\ \\ __/ \\_ ___ \\_ _____/ Hello from Docker! This message shows that your installation appears to be working correctly. Below, we run the same container on a Kubernetes cluster using an Argo workflow template. Be sure to read the comments as they provide useful explanations. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow # new type of k8s spec metadata : generateName : hello-world- # name of the workflow spec spec : entrypoint : whalesay # invoke the whalesay template templates : - name : whalesay # name of the template container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ] resources : # limit the resources limits : memory : 32Mi cpu : 100m Argo adds a new kind of Kubernetes spec called a Workflow . The above spec contains a single template called whalesay which runs the docker/whalesay container and invokes cowsay \"hello world\" . The whalesay template is the entrypoint for the spec. The entrypoint specifies the initial template that should be invoked when the workflow spec is executed by Kubernetes. Being able to specify the entrypoint is more useful when there is more than one template defined in the Kubernetes workflow spec. :-)","title":"Hello World"},{"location":"walk-through/hello-world/#hello-world","text":"Let's start by creating a very simple workflow template to echo \"hello world\" using the docker/whalesay container image from Docker Hub. You can run this directly from your shell with a simple docker command: $ docker run docker/whalesay cowsay \"hello world\" _____________ < hello world > ------------- \\ \\ \\ ## . ## ## ## == ## ## ## ## === / \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\" ___/ === ~~~ { ~~ ~~~~ ~~~ ~~~~ ~~ ~ / === - ~~~ \\_ _____ o __/ \\ \\ __/ \\_ ___ \\_ _____/ Hello from Docker! This message shows that your installation appears to be working correctly. Below, we run the same container on a Kubernetes cluster using an Argo workflow template. Be sure to read the comments as they provide useful explanations. apiVersion : argoproj.io/v1alpha1 kind : Workflow # new type of k8s spec metadata : generateName : hello-world- # name of the workflow spec spec : entrypoint : whalesay # invoke the whalesay template templates : - name : whalesay # name of the template container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ] resources : # limit the resources limits : memory : 32Mi cpu : 100m Argo adds a new kind of Kubernetes spec called a Workflow . The above spec contains a single template called whalesay which runs the docker/whalesay container and invokes cowsay \"hello world\" . The whalesay template is the entrypoint for the spec. The entrypoint specifies the initial template that should be invoked when the workflow spec is executed by Kubernetes. Being able to specify the entrypoint is more useful when there is more than one template defined in the Kubernetes workflow spec. :-)","title":"Hello World"},{"location":"walk-through/kubernetes-resources/","text":"Kubernetes Resources \u00b6 In many cases, you will want to manage Kubernetes resources from Argo workflows. The resource template allows you to create, delete or updated any type of Kubernetes resource. # in a workflow. The resource template type accepts any k8s manifest # (including CRDs) and can perform any `kubectl` action against it (e.g. create, # apply, delete, patch). apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : k8s-jobs- spec : entrypoint : pi-tmpl templates : - name : pi-tmpl resource : # indicates that this is a resource template action : create # can be any kubectl action (e.g. create, delete, apply, patch) # The successCondition and failureCondition are optional expressions. # If failureCondition is true, the step is considered failed. 
# If successCondition is true, the step is considered successful. # They use kubernetes label selection syntax and can be applied against any field # of the resource (not just labels). Multiple AND conditions can be represented by comma # delimited expressions. # For more details: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ successCondition : status.succeeded > 0 failureCondition : status.failed > 3 manifest : | #put your kubernetes spec here apiVersion: batch/v1 kind: Job metadata: generateName: pi-job- spec: template: metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: Never backoffLimit: 4 Note: Currently only a single resource can be managed by a resource template so either a generateName or name must be provided in the resource's meta-data. Resources created in this way are independent of the workflow. If you want the resource to be deleted when the workflow is deleted then you can use Kubernetes garbage collection with the workflow resource as an owner reference ( example ). You can also collect data about the resource in output parameters (see more at k8s-jobs.yaml ) Note: When patching, the resource will accept another attribute, mergeStrategy , which can either be strategic , merge , or json . If this attribute is not supplied, it will default to strategic . Keep in mind that Custom Resources cannot be patched with strategic , so a different strategy must be chosen. For example, suppose you have the CronTab CRD defined, and the following instance of a CronTab : apiVersion : \"stable.example.com/v1\" kind : CronTab spec : cronSpec : \"* * * * */5\" image : my-awesome-cron-image This CronTab can be modified using the following Argo Workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : k8s-patch- spec : entrypoint : cront-tmpl templates : - name : cront-tmpl resource : action : patch mergeStrategy : merge # Must be one of [strategic merge json] manifest : | apiVersion: \"stable.example.com/v1\" kind: CronTab spec: cronSpec: \"* * * * */10\" image: my-awesome-cron-image","title":"Kubernetes Resources"},{"location":"walk-through/kubernetes-resources/#kubernetes-resources","text":"In many cases, you will want to manage Kubernetes resources from Argo workflows. The resource template allows you to create, delete or updated any type of Kubernetes resource. # in a workflow. The resource template type accepts any k8s manifest # (including CRDs) and can perform any `kubectl` action against it (e.g. create, # apply, delete, patch). apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : k8s-jobs- spec : entrypoint : pi-tmpl templates : - name : pi-tmpl resource : # indicates that this is a resource template action : create # can be any kubectl action (e.g. create, delete, apply, patch) # The successCondition and failureCondition are optional expressions. # If failureCondition is true, the step is considered failed. # If successCondition is true, the step is considered successful. # They use kubernetes label selection syntax and can be applied against any field # of the resource (not just labels). Multiple AND conditions can be represented by comma # delimited expressions. 
# For more details: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ successCondition : status.succeeded > 0 failureCondition : status.failed > 3 manifest : | #put your kubernetes spec here apiVersion: batch/v1 kind: Job metadata: generateName: pi-job- spec: template: metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: Never backoffLimit: 4 Note: Currently only a single resource can be managed by a resource template, so either a generateName or name must be provided in the resource's metadata. Resources created in this way are independent of the workflow. If you want the resource to be deleted when the workflow is deleted, then you can use Kubernetes garbage collection with the workflow resource as an owner reference ( example ). You can also collect data about the resource in output parameters (see more at k8s-jobs.yaml ). Note: When patching, the resource will accept another attribute, mergeStrategy , which can either be strategic , merge , or json . If this attribute is not supplied, it will default to strategic . Keep in mind that Custom Resources cannot be patched with strategic , so a different strategy must be chosen. For example, suppose you have the CronTab CRD defined, and the following instance of a CronTab : apiVersion : \"stable.example.com/v1\" kind : CronTab spec : cronSpec : \"* * * * */5\" image : my-awesome-cron-image This CronTab can be modified using the following Argo Workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : k8s-patch- spec : entrypoint : cront-tmpl templates : - name : cront-tmpl resource : action : patch mergeStrategy : merge # Must be one of [strategic merge json] manifest : | apiVersion: \"stable.example.com/v1\" kind: CronTab spec: cronSpec: \"* * * * */10\" image: my-awesome-cron-image","title":"Kubernetes Resources"},{"location":"walk-through/loops/","text":"Loops \u00b6 When writing workflows, it is often very useful to be able to iterate over a set of inputs; this is how Argo Workflows performs loops. There are two basic ways of running a template multiple times. withItems takes a list of things to work on: either plain, single values, which are then usable in your template as '{{item}}' , or JSON objects where each element can be addressed by its key as '{{item.key}}' . withParam takes a JSON array of items and iterates over it - again, the items can be objects as with withItems . This is very powerful, as you can generate the JSON in another step in your workflow, thus creating a dynamic workflow. withItems basic example \u00b6 This is the simplest example: we take a basic list of items and iterate over it with withItems . It is limited to one varying field for each of the workflow templates instantiated.
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : loops- spec : entrypoint : loop-example templates : - name : loop-example steps : - - name : print-message template : whalesay arguments : parameters : - name : message value : \"{{item}}\" withItems : # invoke whalesay once for each item in parallel - hello world # item 1 - goodbye world # item 2 - name : whalesay inputs : parameters : - name : message container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] withItems more complex example \u00b6 If we'd like to pass more than one piece of information in each workflow, you can instead use a JSON object for each entry in withItems and then address the elements by key, as shown in this example. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : loops-maps- spec : entrypoint : loop-map-example templates : - name : loop-map-example # parameter specifies the list to iterate over steps : - - name : test-linux template : cat-os-release arguments : parameters : - name : image value : \"{{item.image}}\" - name : tag value : \"{{item.tag}}\" withItems : - { image : 'debian' , tag : '9.1' } #item set 1 - { image : 'debian' , tag : '8.9' } #item set 2 - { image : 'alpine' , tag : '3.6' } #item set 3 - { image : 'ubuntu' , tag : '17.10' } #item set 4 - name : cat-os-release inputs : parameters : - name : image - name : tag container : image : \"{{inputs.parameters.image}}:{{inputs.parameters.tag}}\" command : [ cat ] args : [ /etc/os-release ] withParam example \u00b6 This example does exactly the same job as the previous example, but using withParam to pass the information as a JSON array argument, instead of hard-coding it into the template. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : loops-param-arg- spec : entrypoint : loop-param-arg-example arguments : parameters : - name : os-list # a list of items value : | [ { \"image\": \"debian\", \"tag\": \"9.1\" }, { \"image\": \"debian\", \"tag\": \"8.9\" }, { \"image\": \"alpine\", \"tag\": \"3.6\" }, { \"image\": \"ubuntu\", \"tag\": \"17.10\" } ] templates : - name : loop-param-arg-example inputs : parameters : - name : os-list steps : - - name : test-linux template : cat-os-release arguments : parameters : - name : image value : \"{{item.image}}\" - name : tag value : \"{{item.tag}}\" withParam : \"{{inputs.parameters.os-list}}\" # parameter specifies the list to iterate over # This template is the same as in the previous example - name : cat-os-release inputs : parameters : - name : image - name : tag container : image : \"{{inputs.parameters.image}}:{{inputs.parameters.tag}}\" command : [ cat ] args : [ /etc/os-release ] withParam example from another step in the workflow \u00b6 Finally, the most powerful form of this is to generate that JSON array of objects dynamically in one step, and then pass it to the next step so that the number and values used in the second step are only calculated at runtime. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : loops-param-result- spec : entrypoint : loop-param-result-example templates : - name : loop-param-result-example steps : - - name : generate template : gen-number-list # Iterate over the list of numbers generated by the generate step above - - name : sleep template : sleep-n-sec arguments : parameters : - name : seconds value : \"{{item}}\" withParam : \"{{steps.generate.outputs.result}}\" # Generate a list of numbers in JSON format - name : gen-number-list script : image : python:alpine3.6 command : [ python ] source : | import json import sys json.dump([i for i in range(20, 31)], sys.stdout) - name : sleep-n-sec inputs : parameters : - name : seconds container : image : alpine:latest command : [ sh , -c ] args : [ \"echo sleeping for {{inputs.parameters.seconds}} seconds; sleep {{inputs.parameters.seconds}}; echo done\" ] Accessing the aggregate results of a loop \u00b6 The output of all iterations can be accessed as a JSON array, once the loop is done. The example below shows how you can read it. Please note: the output of each iteration must be a valid JSON . apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : loop-test spec : entrypoint : main templates : - name : main steps : - - name : execute-parallel-steps template : print-json-entry arguments : parameters : - name : index value : '{{item}}' withParam : '[1, 2, 3]' - - name : call-access-aggregate-output template : access-aggregate-output arguments : parameters : - name : aggregate-results # If the value of each loop iteration isn't a valid JSON, # you get a JSON parse error: value : '{{steps.execute-parallel-steps.outputs.result}}' - name : print-json-entry inputs : parameters : - name : index # The output must be a valid JSON script : image : alpine:latest command : [ sh ] source : | cat < /tmp/hello_world.txt\" ] # generate the content of hello_world.txt outputs : parameters : - name : hello-param # name of output parameter valueFrom : path : /tmp/hello_world.txt # set the value of hello-param to the contents of this hello-world.txt - name : print-message inputs : parameters : - name : message container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] DAG templates use the tasks prefix to refer to another task, for example {{tasks.generate-parameter.outputs.parameters.hello-param}} . result output parameter \u00b6 The result output parameter captures standard output. It is accessible from the outputs map: outputs.result . Only 256 kb of the standard output stream will be captured. Scripts \u00b6 Outputs of a script are assigned to standard output and captured in the result parameter. More details here . Containers \u00b6 Container steps and tasks also have their standard output captured in the result parameter. Given a task , called log-int , result would then be accessible as {{ tasks.log-int.outputs.result }} . If using steps , substitute tasks for steps : {{ steps.log-int.outputs.result }} .","title":"Output Parameters"},{"location":"walk-through/output-parameters/#output-parameters","text":"Output parameters provide a general mechanism to use the result of a step as a parameter (and not just as an artifact). This allows you to use the result from any type of step, not just a script , for conditional tests, loops, and arguments. 
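For instance, here is a minimal sketch of an output parameter driving a conditional test. This is not taken from the walk-through; the template and parameter names (gen-flag, flag, notify) are made up for illustration, but the when syntax and valueFrom.path mechanism are the ones described above:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: output-param-conditional-
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: generate-flag
        template: gen-flag
    - - name: notify
        template: notify
        # Run this step only when the flag written by the previous step equals "yes"
        when: "{{steps.generate-flag.outputs.parameters.flag}} == yes"
  - name: gen-flag
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["echo -n yes > /tmp/flag.txt"]
    outputs:
      parameters:
      - name: flag
        valueFrom:
          path: /tmp/flag.txt   # output parameter read from a generated file, not from stdout
  - name: notify
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["echo the flag was yes"]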
Output parameters work similarly to script result except that the value of the output parameter is set to the contents of a generated file rather than the contents of stdout . apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : output-parameter- spec : entrypoint : output-parameter templates : - name : output-parameter steps : - - name : generate-parameter template : whalesay - - name : consume-parameter template : print-message arguments : parameters : # Pass the hello-param output from the generate-parameter step as the message input to print-message - name : message value : \"{{steps.generate-parameter.outputs.parameters.hello-param}}\" - name : whalesay container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"echo -n hello world > /tmp/hello_world.txt\" ] # generate the content of hello_world.txt outputs : parameters : - name : hello-param # name of output parameter valueFrom : path : /tmp/hello_world.txt # set the value of hello-param to the contents of this hello-world.txt - name : print-message inputs : parameters : - name : message container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] DAG templates use the tasks prefix to refer to another task, for example {{tasks.generate-parameter.outputs.parameters.hello-param}} .","title":"Output Parameters"},{"location":"walk-through/output-parameters/#result-output-parameter","text":"The result output parameter captures standard output. It is accessible from the outputs map: outputs.result . Only 256 kb of the standard output stream will be captured.","title":"result output parameter"},{"location":"walk-through/output-parameters/#scripts","text":"Outputs of a script are assigned to standard output and captured in the result parameter. More details here .","title":"Scripts"},{"location":"walk-through/output-parameters/#containers","text":"Container steps and tasks also have their standard output captured in the result parameter. Given a task , called log-int , result would then be accessible as {{ tasks.log-int.outputs.result }} . If using steps , substitute tasks for steps : {{ steps.log-int.outputs.result }} .","title":"Containers"},{"location":"walk-through/parameters/","text":"Parameters \u00b6 Let's look at a slightly more complex workflow spec with parameters. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-parameters- spec : # invoke the whalesay template with # \"hello world\" as the argument # to the message parameter entrypoint : whalesay arguments : parameters : - name : message value : hello world templates : - name : whalesay inputs : parameters : - name : message # parameter declaration container : # run cowsay with that message input parameter as args image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] This time, the whalesay template takes an input parameter named message that is passed as the args to the cowsay command. In order to reference parameters (e.g., \"{{inputs.parameters.message}}\" ), the parameters must be enclosed in double quotes to escape the curly braces in YAML. The argo CLI provides a convenient way to override parameters used to invoke the entrypoint. For example, the following command would bind the message parameter to \"goodbye world\" instead of the default \"hello world\". 
argo submit arguments-parameters.yaml -p message = \"goodbye world\" In case of multiple parameters that can be overridden, the argo CLI provides a command to load parameters files in YAML or JSON format. Here is an example of that kind of parameter file: message : goodbye world To run use following command: argo submit arguments-parameters.yaml --parameter-file params.yaml Command-line parameters can also be used to override the default entrypoint and invoke any template in the workflow spec. For example, if you add a new version of the whalesay template called whalesay-caps but you don't want to change the default entrypoint, you can invoke this from the command line as follows: argo submit arguments-parameters.yaml --entrypoint whalesay-caps By using a combination of the --entrypoint and -p parameters, you can call any template in the workflow spec with any parameter that you like. The values set in the spec.arguments.parameters are globally scoped and can be accessed via {{workflow.parameters.parameter_name}} . This can be useful to pass information to multiple steps in a workflow. For example, if you wanted to run your workflows with different logging levels that are set in the environment of each container, you could have a YAML file similar to this one: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : global-parameters- spec : entrypoint : A arguments : parameters : - name : log-level value : INFO templates : - name : A container : image : containerA env : - name : LOG_LEVEL value : \"{{workflow.parameters.log-level}}\" command : [ runA ] - name : B container : image : containerB env : - name : LOG_LEVEL value : \"{{workflow.parameters.log-level}}\" command : [ runB ] In this workflow, both steps A and B would have the same log-level set to INFO and can easily be changed between workflow submissions using the -p flag.","title":"Parameters"},{"location":"walk-through/parameters/#parameters","text":"Let's look at a slightly more complex workflow spec with parameters. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-parameters- spec : # invoke the whalesay template with # \"hello world\" as the argument # to the message parameter entrypoint : whalesay arguments : parameters : - name : message value : hello world templates : - name : whalesay inputs : parameters : - name : message # parameter declaration container : # run cowsay with that message input parameter as args image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] This time, the whalesay template takes an input parameter named message that is passed as the args to the cowsay command. In order to reference parameters (e.g., \"{{inputs.parameters.message}}\" ), the parameters must be enclosed in double quotes to escape the curly braces in YAML. The argo CLI provides a convenient way to override parameters used to invoke the entrypoint. For example, the following command would bind the message parameter to \"goodbye world\" instead of the default \"hello world\". argo submit arguments-parameters.yaml -p message = \"goodbye world\" In case of multiple parameters that can be overridden, the argo CLI provides a command to load parameters files in YAML or JSON format. 
Here is an example of that kind of parameter file: message : goodbye world To run use following command: argo submit arguments-parameters.yaml --parameter-file params.yaml Command-line parameters can also be used to override the default entrypoint and invoke any template in the workflow spec. For example, if you add a new version of the whalesay template called whalesay-caps but you don't want to change the default entrypoint, you can invoke this from the command line as follows: argo submit arguments-parameters.yaml --entrypoint whalesay-caps By using a combination of the --entrypoint and -p parameters, you can call any template in the workflow spec with any parameter that you like. The values set in the spec.arguments.parameters are globally scoped and can be accessed via {{workflow.parameters.parameter_name}} . This can be useful to pass information to multiple steps in a workflow. For example, if you wanted to run your workflows with different logging levels that are set in the environment of each container, you could have a YAML file similar to this one: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : global-parameters- spec : entrypoint : A arguments : parameters : - name : log-level value : INFO templates : - name : A container : image : containerA env : - name : LOG_LEVEL value : \"{{workflow.parameters.log-level}}\" command : [ runA ] - name : B container : image : containerB env : - name : LOG_LEVEL value : \"{{workflow.parameters.log-level}}\" command : [ runB ] In this workflow, both steps A and B would have the same log-level set to INFO and can easily be changed between workflow submissions using the -p flag.","title":"Parameters"},{"location":"walk-through/recursion/","text":"Recursion \u00b6 Templates can recursively invoke each other! In this variation of the above coin-flip template, we continue to flip coins until it comes up heads. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : coinflip-recursive- spec : entrypoint : coinflip templates : - name : coinflip steps : # flip a coin - - name : flip-coin template : flip-coin # evaluate the result in parallel - - name : heads template : heads # call heads template if \"heads\" when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails # keep flipping coins if \"tails\" template : coinflip when : \"{{steps.flip-coin.outputs.result}} == tails\" - name : flip-coin script : image : python:alpine3.6 command : [ python ] source : | import random result = \"heads\" if random.randint(0,1) == 0 else \"tails\" print(result) - name : heads container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads\\\"\" ] Here's the result of a couple of runs of coin-flip for comparison. 
argo get coinflip-recursive-tzcb5 STEP PODNAME MESSAGE \u2714 coinflip-recursive-vhph5 \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-vhph5-2123890397 \u2514\u2500\u252c\u2500\u2714 heads coinflip-recursive-vhph5-128690560 \u2514\u2500\u25cb tails STEP PODNAME MESSAGE \u2714 coinflip-recursive-tzcb5 \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-322836820 \u2514\u2500\u252c\u2500\u25cb heads \u2514\u2500\u2714 tails \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-1863890320 \u2514\u2500\u252c\u2500\u25cb heads \u2514\u2500\u2714 tails \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-1768147140 \u2514\u2500\u252c\u2500\u25cb heads \u2514\u2500\u2714 tails \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-4080411136 \u2514\u2500\u252c\u2500\u2714 heads coinflip-recursive-tzcb5-4080323273 \u2514\u2500\u25cb tails In the first run, the coin immediately comes up heads and we stop. In the second run, the coin comes up tail three times before it finally comes up heads and we stop.","title":"Recursion"},{"location":"walk-through/recursion/#recursion","text":"Templates can recursively invoke each other! In this variation of the above coin-flip template, we continue to flip coins until it comes up heads. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : coinflip-recursive- spec : entrypoint : coinflip templates : - name : coinflip steps : # flip a coin - - name : flip-coin template : flip-coin # evaluate the result in parallel - - name : heads template : heads # call heads template if \"heads\" when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails # keep flipping coins if \"tails\" template : coinflip when : \"{{steps.flip-coin.outputs.result}} == tails\" - name : flip-coin script : image : python:alpine3.6 command : [ python ] source : | import random result = \"heads\" if random.randint(0,1) == 0 else \"tails\" print(result) - name : heads container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads\\\"\" ] Here's the result of a couple of runs of coin-flip for comparison. argo get coinflip-recursive-tzcb5 STEP PODNAME MESSAGE \u2714 coinflip-recursive-vhph5 \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-vhph5-2123890397 \u2514\u2500\u252c\u2500\u2714 heads coinflip-recursive-vhph5-128690560 \u2514\u2500\u25cb tails STEP PODNAME MESSAGE \u2714 coinflip-recursive-tzcb5 \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-322836820 \u2514\u2500\u252c\u2500\u25cb heads \u2514\u2500\u2714 tails \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-1863890320 \u2514\u2500\u252c\u2500\u25cb heads \u2514\u2500\u2714 tails \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-1768147140 \u2514\u2500\u252c\u2500\u25cb heads \u2514\u2500\u2714 tails \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-4080411136 \u2514\u2500\u252c\u2500\u2714 heads coinflip-recursive-tzcb5-4080323273 \u2514\u2500\u25cb tails In the first run, the coin immediately comes up heads and we stop. 
In the second run, the coin comes up tail three times before it finally comes up heads and we stop.","title":"Recursion"},{"location":"walk-through/retrying-failed-or-errored-steps/","text":"Retrying Failed or Errored Steps \u00b6 You can specify a retryStrategy that will dictate how failed or errored steps are retried: # This example demonstrates the use of retry back offs apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : retry-backoff- spec : entrypoint : retry-backoff templates : - name : retry-backoff retryStrategy : limit : 10 retryPolicy : \"Always\" backoff : duration : \"1\" # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: \"2m\", \"6h\", \"1d\" factor : 2 maxDuration : \"1m\" # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: \"2m\", \"6h\", \"1d\" affinity : nodeAntiAffinity : {} container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] limit is the maximum number of times the container will be retried. retryPolicy specifies if a container will be retried on failure, error, both, or only transient errors (e.g. i/o or TLS handshake timeout). \"Always\" retries on both errors and failures. Also available: OnFailure (default), \" OnError \", and \" OnTransientError \" (available after v3.0.0-rc2). backoff is an exponential back-off nodeAntiAffinity prevents running steps on the same host. Current implementation allows only empty nodeAntiAffinity (i.e. nodeAntiAffinity: {} ) and by default it uses label kubernetes.io/hostname as the selector. Providing an empty retryStrategy (i.e. retryStrategy: {} ) will cause a container to retry until completion.","title":"Retrying Failed or Errored Steps"},{"location":"walk-through/retrying-failed-or-errored-steps/#retrying-failed-or-errored-steps","text":"You can specify a retryStrategy that will dictate how failed or errored steps are retried: # This example demonstrates the use of retry back offs apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : retry-backoff- spec : entrypoint : retry-backoff templates : - name : retry-backoff retryStrategy : limit : 10 retryPolicy : \"Always\" backoff : duration : \"1\" # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: \"2m\", \"6h\", \"1d\" factor : 2 maxDuration : \"1m\" # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: \"2m\", \"6h\", \"1d\" affinity : nodeAntiAffinity : {} container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] limit is the maximum number of times the container will be retried. retryPolicy specifies if a container will be retried on failure, error, both, or only transient errors (e.g. i/o or TLS handshake timeout). \"Always\" retries on both errors and failures. Also available: OnFailure (default), \" OnError \", and \" OnTransientError \" (available after v3.0.0-rc2). backoff is an exponential back-off nodeAntiAffinity prevents running steps on the same host. Current implementation allows only empty nodeAntiAffinity (i.e. nodeAntiAffinity: {} ) and by default it uses label kubernetes.io/hostname as the selector. Providing an empty retryStrategy (i.e. 
retryStrategy: {} ) will cause a container to retry until completion.","title":"Retrying Failed or Errored Steps"},{"location":"walk-through/scripts-and-results/","text":"Scripts And Results \u00b6 Often, we just want a template that executes a script specified as a here-script (also known as a here document ) in the workflow spec. This example shows how to do that: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : scripts-bash- spec : entrypoint : bash-script-example templates : - name : bash-script-example steps : - - name : generate template : gen-random-int-bash - - name : print template : print-message arguments : parameters : - name : message value : \"{{steps.generate.outputs.result}}\" # The result of the here-script - name : gen-random-int-bash script : image : debian:9.4 command : [ bash ] source : | # Contents of the here-script cat /dev/urandom | od -N2 -An -i | awk -v f=1 -v r=100 '{printf \"%i\\n\", f + r * $1 / 65536}' - name : gen-random-int-python script : image : python:alpine3.6 command : [ python ] source : | import random i = random.randint(1, 100) print(i) - name : gen-random-int-javascript script : image : node:9.1-alpine command : [ node ] source : | var rand = Math.floor(Math.random() * 100); console.log(rand); - name : print-message inputs : parameters : - name : message container : image : alpine:latest command : [ sh , -c ] args : [ \"echo result was: {{inputs.parameters.message}}\" ] The script keyword allows the specification of the script body using the source tag. This creates a temporary file containing the script body and then passes the name of the temporary file as the final parameter to command , which should be an interpreter that executes the script body. The use of the script feature also assigns the standard output of running the script to a special output parameter named result . This allows you to use the result of running the script itself in the rest of the workflow spec. In this example, the result is simply echoed by the print-message template.","title":"Scripts And Results"},{"location":"walk-through/scripts-and-results/#scripts-and-results","text":"Often, we just want a template that executes a script specified as a here-script (also known as a here document ) in the workflow spec. This example shows how to do that: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : scripts-bash- spec : entrypoint : bash-script-example templates : - name : bash-script-example steps : - - name : generate template : gen-random-int-bash - - name : print template : print-message arguments : parameters : - name : message value : \"{{steps.generate.outputs.result}}\" # The result of the here-script - name : gen-random-int-bash script : image : debian:9.4 command : [ bash ] source : | # Contents of the here-script cat /dev/urandom | od -N2 -An -i | awk -v f=1 -v r=100 '{printf \"%i\\n\", f + r * $1 / 65536}' - name : gen-random-int-python script : image : python:alpine3.6 command : [ python ] source : | import random i = random.randint(1, 100) print(i) - name : gen-random-int-javascript script : image : node:9.1-alpine command : [ node ] source : | var rand = Math.floor(Math.random() * 100); console.log(rand); - name : print-message inputs : parameters : - name : message container : image : alpine:latest command : [ sh , -c ] args : [ \"echo result was: {{inputs.parameters.message}}\" ] The script keyword allows the specification of the script body using the source tag. 
This creates a temporary file containing the script body and then passes the name of the temporary file as the final parameter to command , which should be an interpreter that executes the script body. The use of the script feature also assigns the standard output of running the script to a special output parameter named result . This allows you to use the result of running the script itself in the rest of the workflow spec. In this example, the result is simply echoed by the print-message template.","title":"Scripts And Results"},{"location":"walk-through/secrets/","text":"Secrets \u00b6 Argo supports the same secrets syntax and mechanisms as Kubernetes Pod specs, which allows access to secrets as environment variables or volume mounts. See the Kubernetes documentation for more information. # To run this example, first create the secret by running: # kubectl create secret generic my-secret --from-literal=mypassword=S00perS3cretPa55word apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : secret-example- spec : entrypoint : whalesay # To access secrets as files, add a volume entry in spec.volumes[] and # then in the container template spec, add a mount using volumeMounts. volumes : - name : my-secret-vol secret : secretName : my-secret # name of an existing k8s secret templates : - name : whalesay container : image : alpine:3.7 command : [ sh , -c ] args : [ ' echo \"secret from env: $MYSECRETPASSWORD\"; echo \"secret from file: `cat /secret/mountpath/mypassword`\" ' ] # To access secrets as environment variables, use the k8s valueFrom and # secretKeyRef constructs. env : - name : MYSECRETPASSWORD # name of env var valueFrom : secretKeyRef : name : my-secret # name of an existing k8s secret key : mypassword # 'key' subcomponent of the secret volumeMounts : - name : my-secret-vol # mount file containing secret at /secret/mountpath mountPath : \"/secret/mountpath\"","title":"Secrets"},{"location":"walk-through/secrets/#secrets","text":"Argo supports the same secrets syntax and mechanisms as Kubernetes Pod specs, which allows access to secrets as environment variables or volume mounts. See the Kubernetes documentation for more information. # To run this example, first create the secret by running: # kubectl create secret generic my-secret --from-literal=mypassword=S00perS3cretPa55word apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : secret-example- spec : entrypoint : whalesay # To access secrets as files, add a volume entry in spec.volumes[] and # then in the container template spec, add a mount using volumeMounts. volumes : - name : my-secret-vol secret : secretName : my-secret # name of an existing k8s secret templates : - name : whalesay container : image : alpine:3.7 command : [ sh , -c ] args : [ ' echo \"secret from env: $MYSECRETPASSWORD\"; echo \"secret from file: `cat /secret/mountpath/mypassword`\" ' ] # To access secrets as environment variables, use the k8s valueFrom and # secretKeyRef constructs. env : - name : MYSECRETPASSWORD # name of env var valueFrom : secretKeyRef : name : my-secret # name of an existing k8s secret key : mypassword # 'key' subcomponent of the secret volumeMounts : - name : my-secret-vol # mount file containing secret at /secret/mountpath mountPath : \"/secret/mountpath\"","title":"Secrets"},{"location":"walk-through/sidecars/","text":"Sidecars \u00b6 A sidecar is another container that executes concurrently in the same pod as the main container and is useful in creating multi-container pods. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : sidecar-nginx- spec : entrypoint : sidecar-nginx-example templates : - name : sidecar-nginx-example container : image : appropriate/curl command : [ sh , -c ] # Try to read from nginx web server until it comes up args : [ \"until `curl -G 'http://127.0.0.1/' >& /tmp/out`; do echo sleep && sleep 1; done && cat /tmp/out\" ] # Create a simple nginx web server sidecars : - name : nginx image : nginx:1.13 command : [ nginx , -g , daemon off; ] In the above example, we create a sidecar container that runs Nginx as a simple web server. The order in which containers come up is random, so in this example the main container polls the Nginx container until it is ready to service requests. This is a good design pattern when designing multi-container systems: always wait for any services you need to come up before running your main code.","title":"Sidecars"},{"location":"walk-through/sidecars/#sidecars","text":"A sidecar is another container that executes concurrently in the same pod as the main container and is useful in creating multi-container pods. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : sidecar-nginx- spec : entrypoint : sidecar-nginx-example templates : - name : sidecar-nginx-example container : image : appropriate/curl command : [ sh , -c ] # Try to read from nginx web server until it comes up args : [ \"until `curl -G 'http://127.0.0.1/' >& /tmp/out`; do echo sleep && sleep 1; done && cat /tmp/out\" ] # Create a simple nginx web server sidecars : - name : nginx image : nginx:1.13 command : [ nginx , -g , daemon off; ] In the above example, we create a sidecar container that runs Nginx as a simple web server. The order in which containers come up is random, so in this example the main container polls the Nginx container until it is ready to service requests. This is a good design pattern when designing multi-container systems: always wait for any services you need to come up before running your main code.","title":"Sidecars"},{"location":"walk-through/steps/","text":"Steps \u00b6 In this example, we'll see how to create multi-step workflows, how to define more than one template in a workflow spec, and how to create nested workflows. Be sure to read the comments as they provide useful explanations. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : steps- spec : entrypoint : hello-hello-hello # This spec contains two templates: hello-hello-hello and whalesay templates : - name : hello-hello-hello # Instead of just running a container # This template has a sequence of steps steps : - - name : hello1 # hello1 is run before the following steps template : whalesay arguments : parameters : - name : message value : \"hello1\" - - name : hello2a # double dash => run after previous step template : whalesay arguments : parameters : - name : message value : \"hello2a\" - name : hello2b # single dash => run in parallel with previous step template : whalesay arguments : parameters : - name : message value : \"hello2b\" # This is the same template as from the previous example - name : whalesay inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] The above workflow spec prints three different flavors of \"hello\". The hello-hello-hello template consists of three steps . 
The first step named hello1 will be run in sequence whereas the next two steps named hello2a and hello2b will be run in parallel with each other. Using the argo CLI command, we can graphically display the execution history of this workflow spec, which shows that the steps named hello2a and hello2b ran in parallel with each other. STEP TEMPLATE PODNAME DURATION MESSAGE \u2714 steps-z2zdn hello-hello-hello \u251c\u2500\u2500\u2500\u2714 hello1 whalesay steps-z2zdn-27420706 2s \u2514\u2500\u252c\u2500\u2714 hello2a whalesay steps-z2zdn-2006760091 3s \u2514\u2500\u2714 hello2b whalesay steps-z2zdn-2023537710 3s","title":"Steps"},{"location":"walk-through/steps/#steps","text":"In this example, we'll see how to create multi-step workflows, how to define more than one template in a workflow spec, and how to create nested workflows. Be sure to read the comments as they provide useful explanations. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : steps- spec : entrypoint : hello-hello-hello # This spec contains two templates: hello-hello-hello and whalesay templates : - name : hello-hello-hello # Instead of just running a container # This template has a sequence of steps steps : - - name : hello1 # hello1 is run before the following steps template : whalesay arguments : parameters : - name : message value : \"hello1\" - - name : hello2a # double dash => run after previous step template : whalesay arguments : parameters : - name : message value : \"hello2a\" - name : hello2b # single dash => run in parallel with previous step template : whalesay arguments : parameters : - name : message value : \"hello2b\" # This is the same template as from the previous example - name : whalesay inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] The above workflow spec prints three different flavors of \"hello\". The hello-hello-hello template consists of three steps . The first step named hello1 will be run in sequence whereas the next two steps named hello2a and hello2b will be run in parallel with each other. Using the argo CLI command, we can graphically display the execution history of this workflow spec, which shows that the steps named hello2a and hello2b ran in parallel with each other. STEP TEMPLATE PODNAME DURATION MESSAGE \u2714 steps-z2zdn hello-hello-hello \u251c\u2500\u2500\u2500\u2714 hello1 whalesay steps-z2zdn-27420706 2s \u2514\u2500\u252c\u2500\u2714 hello2a whalesay steps-z2zdn-2006760091 3s \u2514\u2500\u2714 hello2b whalesay steps-z2zdn-2023537710 3s","title":"Steps"},{"location":"walk-through/suspending/","text":"Suspending \u00b6 Workflows can be suspended by argo suspend WORKFLOW Or by specifying a suspend step on the workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : suspend-template- spec : entrypoint : suspend templates : - name : suspend steps : - - name : build template : whalesay - - name : approve template : approve - - name : delay template : delay - - name : release template : whalesay - name : approve suspend : {} - name : delay suspend : duration : \"20\" # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: \"2m\", \"6h\" - name : whalesay container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ] Once suspended, a Workflow will not schedule any new steps until it is resumed. 
It can be resumed manually by argo resume WORKFLOW Or automatically with a duration limit as the example above.","title":"Suspending"},{"location":"walk-through/suspending/#suspending","text":"Workflows can be suspended by argo suspend WORKFLOW Or by specifying a suspend step on the workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : suspend-template- spec : entrypoint : suspend templates : - name : suspend steps : - - name : build template : whalesay - - name : approve template : approve - - name : delay template : delay - - name : release template : whalesay - name : approve suspend : {} - name : delay suspend : duration : \"20\" # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: \"2m\", \"6h\" - name : whalesay container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ] Once suspended, a Workflow will not schedule any new steps until it is resumed. It can be resumed manually by argo resume WORKFLOW Or automatically with a duration limit as the example above.","title":"Suspending"},{"location":"walk-through/the-structure-of-workflow-specs/","text":"The Structure of Workflow Specs \u00b6 We now know enough about the basic components of a workflow spec. To review its basic structure: Kubernetes header including meta-data Spec body Entrypoint invocation with optional arguments List of template definitions For each template definition Name of the template Optionally a list of inputs Optionally a list of outputs Container invocation (leaf template) or a list of steps For each step, a template invocation To summarize, workflow specs are composed of a set of Argo templates where each template consists of an optional input section, an optional output section and either a container invocation or a list of steps where each step invokes another template. Note that the container section of the workflow spec will accept the same options as the container section of a pod spec, including but not limited to environment variables, secrets, and volume mounts. Similarly, for volume claims and volumes.","title":"The Structure of Workflow Specs"},{"location":"walk-through/the-structure-of-workflow-specs/#the-structure-of-workflow-specs","text":"We now know enough about the basic components of a workflow spec. To review its basic structure: Kubernetes header including meta-data Spec body Entrypoint invocation with optional arguments List of template definitions For each template definition Name of the template Optionally a list of inputs Optionally a list of outputs Container invocation (leaf template) or a list of steps For each step, a template invocation To summarize, workflow specs are composed of a set of Argo templates where each template consists of an optional input section, an optional output section and either a container invocation or a list of steps where each step invokes another template. Note that the container section of the workflow spec will accept the same options as the container section of a pod spec, including but not limited to environment variables, secrets, and volume mounts. 
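As a rough illustration of that structure (a sketch only, not an example from the walk-through; the names main and hello are placeholders), a skeletal spec maps onto the outline above like this:

apiVersion: argoproj.io/v1alpha1        # Kubernetes header
kind: Workflow
metadata:
  generateName: structure-skeleton-     # metadata
spec:                                    # spec body
  entrypoint: main                       # entrypoint invocation
  arguments:                             # optional arguments
    parameters:
    - name: message
      value: hello world
  templates:
  - name: main                           # template definition with a list of steps
    steps:
    - - name: say
        template: hello                  # each step invokes another template
        arguments:
          parameters:
          - name: message
            value: "{{workflow.parameters.message}}"
  - name: hello                          # leaf template: a container invocation
    inputs:                              # optional list of inputs
      parameters:
      - name: message
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["echo {{inputs.parameters.message}}"]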
Similarly, for volume claims and volumes.","title":"The Structure of Workflow Specs"},{"location":"walk-through/timeouts/","text":"Timeouts \u00b6 You can use the field activeDeadlineSeconds to limit the elapsed time for a workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : timeouts- spec : activeDeadlineSeconds : 10 # terminate workflow after 10 seconds entrypoint : sleep templates : - name : sleep container : image : alpine:latest command : [ sh , -c ] args : [ \"echo sleeping for 1m; sleep 60; echo done\" ] You can limit the elapsed time for a specific template as well: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : timeouts- spec : entrypoint : sleep templates : - name : sleep activeDeadlineSeconds : 10 # terminate container template after 10 seconds container : image : alpine:latest command : [ sh , -c ] args : [ \"echo sleeping for 1m; sleep 60; echo done\" ]","title":"Timeouts"},{"location":"walk-through/timeouts/#timeouts","text":"You can use the field activeDeadlineSeconds to limit the elapsed time for a workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : timeouts- spec : activeDeadlineSeconds : 10 # terminate workflow after 10 seconds entrypoint : sleep templates : - name : sleep container : image : alpine:latest command : [ sh , -c ] args : [ \"echo sleeping for 1m; sleep 60; echo done\" ] You can limit the elapsed time for a specific template as well: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : timeouts- spec : entrypoint : sleep templates : - name : sleep activeDeadlineSeconds : 10 # terminate container template after 10 seconds container : image : alpine:latest command : [ sh , -c ] args : [ \"echo sleeping for 1m; sleep 60; echo done\" ]","title":"Timeouts"},{"location":"walk-through/volumes/","text":"Volumes \u00b6 The following example dynamically creates a volume and then uses the volume in a two step workflow. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : volumes-pvc- spec : entrypoint : volumes-pvc-example volumeClaimTemplates : # define volume, same syntax as k8s Pod spec - metadata : name : workdir # name of volume claim spec : accessModes : [ \"ReadWriteOnce\" ] resources : requests : storage : 1Gi # Gi => 1024 * 1024 * 1024 templates : - name : volumes-pvc-example steps : - - name : generate template : whalesay - - name : print template : print-message - name : whalesay container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt\" ] # Mount workdir volume at /mnt/vol before invoking docker/whalesay volumeMounts : # same syntax as k8s Pod spec - name : workdir mountPath : /mnt/vol - name : print-message container : image : alpine:latest command : [ sh , -c ] args : [ \"echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt\" ] # Mount workdir volume at /mnt/vol before invoking docker/whalesay volumeMounts : # same syntax as k8s Pod spec - name : workdir mountPath : /mnt/vol Volumes are a very useful way to move large amounts of data from one step in a workflow to another. Depending on the system, some volumes may be accessible concurrently from multiple steps. In some cases, you want to access an already existing volume rather than creating/destroying one dynamically. 
# Define Kubernetes PVC kind : PersistentVolumeClaim apiVersion : v1 metadata : name : my-existing-volume spec : accessModes : [ \"ReadWriteOnce\" ] resources : requests : storage : 1Gi --- apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : volumes-existing- spec : entrypoint : volumes-existing-example volumes : # Pass my-existing-volume as an argument to the volumes-existing-example template # Same syntax as k8s Pod spec - name : workdir persistentVolumeClaim : claimName : my-existing-volume templates : - name : volumes-existing-example steps : - - name : generate template : whalesay - - name : print template : print-message - name : whalesay container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol - name : print-message container : image : alpine:latest command : [ sh , -c ] args : [ \"echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol It's also possible to declare existing volumes at the template level, instead of the workflow level. Workflows can generate volumes using a resource step. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : template-level-volume- spec : entrypoint : generate-and-use-volume templates : - name : generate-and-use-volume steps : - - name : generate-volume template : generate-volume arguments : parameters : - name : pvc-size # In a real-world example, this could be generated by a previous workflow step. value : '1Gi' - - name : generate template : whalesay arguments : parameters : - name : pvc-name value : '{{steps.generate-volume.outputs.parameters.pvc-name}}' - - name : print template : print-message arguments : parameters : - name : pvc-name value : '{{steps.generate-volume.outputs.parameters.pvc-name}}' - name : generate-volume inputs : parameters : - name : pvc-size resource : action : create setOwnerReference : true manifest : | apiVersion: v1 kind: PersistentVolumeClaim metadata: generateName: pvc-example- spec: accessModes: ['ReadWriteOnce', 'ReadOnlyMany'] resources: requests: storage: '{{inputs.parameters.pvc-size}}' outputs : parameters : - name : pvc-name valueFrom : jsonPath : '{.metadata.name}' - name : whalesay inputs : parameters : - name : pvc-name volumes : - name : workdir persistentVolumeClaim : claimName : '{{inputs.parameters.pvc-name}}' container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol - name : print-message inputs : parameters : - name : pvc-name volumes : - name : workdir persistentVolumeClaim : claimName : '{{inputs.parameters.pvc-name}}' container : image : alpine:latest command : [ sh , -c ] args : [ \"echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol","title":"Volumes"},{"location":"walk-through/volumes/#volumes","text":"The following example dynamically creates a volume and then uses the volume in a two step workflow. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : volumes-pvc- spec : entrypoint : volumes-pvc-example volumeClaimTemplates : # define volume, same syntax as k8s Pod spec - metadata : name : workdir # name of volume claim spec : accessModes : [ \"ReadWriteOnce\" ] resources : requests : storage : 1Gi # Gi => 1024 * 1024 * 1024 templates : - name : volumes-pvc-example steps : - - name : generate template : whalesay - - name : print template : print-message - name : whalesay container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt\" ] # Mount workdir volume at /mnt/vol before invoking docker/whalesay volumeMounts : # same syntax as k8s Pod spec - name : workdir mountPath : /mnt/vol - name : print-message container : image : alpine:latest command : [ sh , -c ] args : [ \"echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt\" ] # Mount workdir volume at /mnt/vol before invoking docker/whalesay volumeMounts : # same syntax as k8s Pod spec - name : workdir mountPath : /mnt/vol Volumes are a very useful way to move large amounts of data from one step in a workflow to another. Depending on the system, some volumes may be accessible concurrently from multiple steps. In some cases, you want to access an already existing volume rather than creating/destroying one dynamically. # Define Kubernetes PVC kind : PersistentVolumeClaim apiVersion : v1 metadata : name : my-existing-volume spec : accessModes : [ \"ReadWriteOnce\" ] resources : requests : storage : 1Gi --- apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : volumes-existing- spec : entrypoint : volumes-existing-example volumes : # Pass my-existing-volume as an argument to the volumes-existing-example template # Same syntax as k8s Pod spec - name : workdir persistentVolumeClaim : claimName : my-existing-volume templates : - name : volumes-existing-example steps : - - name : generate template : whalesay - - name : print template : print-message - name : whalesay container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol - name : print-message container : image : alpine:latest command : [ sh , -c ] args : [ \"echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol It's also possible to declare existing volumes at the template level, instead of the workflow level. Workflows can generate volumes using a resource step. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : template-level-volume- spec : entrypoint : generate-and-use-volume templates : - name : generate-and-use-volume steps : - - name : generate-volume template : generate-volume arguments : parameters : - name : pvc-size # In a real-world example, this could be generated by a previous workflow step. 
value : '1Gi' - - name : generate template : whalesay arguments : parameters : - name : pvc-name value : '{{steps.generate-volume.outputs.parameters.pvc-name}}' - - name : print template : print-message arguments : parameters : - name : pvc-name value : '{{steps.generate-volume.outputs.parameters.pvc-name}}' - name : generate-volume inputs : parameters : - name : pvc-size resource : action : create setOwnerReference : true manifest : | apiVersion: v1 kind: PersistentVolumeClaim metadata: generateName: pvc-example- spec: accessModes: ['ReadWriteOnce', 'ReadOnlyMany'] resources: requests: storage: '{{inputs.parameters.pvc-size}}' outputs : parameters : - name : pvc-name valueFrom : jsonPath : '{.metadata.name}' - name : whalesay inputs : parameters : - name : pvc-name volumes : - name : workdir persistentVolumeClaim : claimName : '{{inputs.parameters.pvc-name}}' container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol - name : print-message inputs : parameters : - name : pvc-name volumes : - name : workdir persistentVolumeClaim : claimName : '{{inputs.parameters.pvc-name}}' container : image : alpine:latest command : [ sh , -c ] args : [ \"echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol","title":"Volumes"}]} \ No newline at end of file +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Argo Workflows \u00b6 What is Argo Workflows? \u00b6 Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD (Custom Resource Definition). Define workflows where each step in the workflow is a container. Model multi-step workflows as a sequence of tasks or capture the dependencies between tasks using a directed acyclic graph (DAG). Easily run compute intensive jobs for machine learning or data processing in a fraction of the time using Argo Workflows on Kubernetes. Argo is a Cloud Native Computing Foundation (CNCF) graduated project. Use Cases \u00b6 Machine Learning pipelines Data and batch processing Infrastructure automation CI/CD Other use cases Why Argo Workflows? \u00b6 Argo Workflows is the most popular workflow execution engine for Kubernetes. Light-weight, scalable, and easier to use. Designed from the ground up for containers without the overhead and limitations of legacy VM and server-based environments. Cloud agnostic and can run on any Kubernetes cluster. Read what people said in our latest survey Try Argo Workflows \u00b6 Access the demo environment (login using Github) Who uses Argo Workflows? \u00b6 About 200+ organizations are officially using Argo Workflows Ecosystem \u00b6 Just some of the projects that use or rely on Argo Workflows (complete list here ): Argo Events Couler Hera Katib Kedro Kubeflow Pipelines Netflix Metaflow Onepanel Orchest Ploomber Seldon SQLFlow Client Libraries \u00b6 Check out our Java, Golang and Python clients . 
Quickstart \u00b6 Get started here Walk-through examples Documentation \u00b6 View the docs Features \u00b6 An incomplete list of features Argo Workflows provide: UI to visualize and manage Workflows Artifact support (S3, Artifactory, Alibaba Cloud OSS, Azure Blob Storage, HTTP, Git, GCS, raw) Workflow templating to store commonly used Workflows in the cluster Archiving Workflows after executing for later access Scheduled workflows using cron Server interface with REST API (HTTP and GRPC) DAG or Steps based declaration of workflows Step level input & outputs (artifacts/parameters) Loops Parameterization Conditionals Timeouts (step & workflow level) Retry (step & workflow level) Resubmit (memoized) Suspend & Resume Cancellation K8s resource orchestration Exit Hooks (notifications, cleanup) Garbage collection of completed workflow Scheduling (affinity/tolerations/node selectors) Volumes (ephemeral/existing) Parallelism limits Daemoned steps DinD (docker-in-docker) Script steps Event emission Prometheus metrics Multiple executors Multiple pod and workflow garbage collection strategies Automatically calculated resource usage per step Java/Golang/Python SDKs Pod Disruption Budget support Single-sign on (OAuth2/OIDC) Webhook triggering CLI Out-of-the box and custom Prometheus metrics Windows container support Embedded widgets Multiplex log viewer Community Meetings \u00b6 We host monthly community meetings where we and the community showcase demos and discuss the current and future state of the project. Feel free to join us! For Community Meeting information, minutes and recordings please see here . Participation in the Argo Workflows project is governed by the CNCF Code of Conduct Community Blogs and Presentations \u00b6 Awesome-Argo: A Curated List of Awesome Projects and Resources Related to Argo Automation of Everything - How To Combine Argo Events, Workflows & Pipelines, CD, and Rollouts Argo Workflows and Pipelines - CI/CD, Machine Learning, and Other Kubernetes Workflows Argo Ansible role: Provisioning Argo Workflows on OpenShift Argo Workflows vs Apache Airflow CI/CD with Argo on Kubernetes Distributed Machine Learning Patterns from Manning Publication Running Argo Workflows Across Multiple Kubernetes Clusters Open Source Model Management Roundup: Polyaxon, Argo, and Seldon Producing 200 OpenStreetMap extracts in 35 minutes using a scalable data workflow Argo integration review TGI Kubernetes with Joe Beda: Argo workflow system Project Resources \u00b6 Argo Project GitHub organization Argo Website Argo Slack Security \u00b6 See SECURITY.md .","title":"Home"},{"location":"#argo-workflows","text":"","title":"Argo Workflows"},{"location":"#what-is-argo-workflows","text":"Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD (Custom Resource Definition). Define workflows where each step in the workflow is a container. Model multi-step workflows as a sequence of tasks or capture the dependencies between tasks using a directed acyclic graph (DAG). Easily run compute intensive jobs for machine learning or data processing in a fraction of the time using Argo Workflows on Kubernetes. 
Argo is a Cloud Native Computing Foundation (CNCF) graduated project.","title":"What is Argo Workflows?"},{"location":"#use-cases","text":"Machine Learning pipelines Data and batch processing Infrastructure automation CI/CD Other use cases","title":"Use Cases"},{"location":"#why-argo-workflows","text":"Argo Workflows is the most popular workflow execution engine for Kubernetes. Light-weight, scalable, and easier to use. Designed from the ground up for containers without the overhead and limitations of legacy VM and server-based environments. Cloud agnostic and can run on any Kubernetes cluster. Read what people said in our latest survey","title":"Why Argo Workflows?"},{"location":"#try-argo-workflows","text":"Access the demo environment (login using Github)","title":"Try Argo Workflows"},{"location":"#who-uses-argo-workflows","text":"About 200+ organizations are officially using Argo Workflows","title":"Who uses Argo Workflows?"},{"location":"#ecosystem","text":"Just some of the projects that use or rely on Argo Workflows (complete list here ): Argo Events Couler Hera Katib Kedro Kubeflow Pipelines Netflix Metaflow Onepanel Orchest Ploomber Seldon SQLFlow","title":"Ecosystem"},{"location":"#client-libraries","text":"Check out our Java, Golang and Python clients .","title":"Client Libraries"},{"location":"#quickstart","text":"Get started here Walk-through examples","title":"Quickstart"},{"location":"#documentation","text":"View the docs","title":"Documentation"},{"location":"#features","text":"An incomplete list of features Argo Workflows provide: UI to visualize and manage Workflows Artifact support (S3, Artifactory, Alibaba Cloud OSS, Azure Blob Storage, HTTP, Git, GCS, raw) Workflow templating to store commonly used Workflows in the cluster Archiving Workflows after executing for later access Scheduled workflows using cron Server interface with REST API (HTTP and GRPC) DAG or Steps based declaration of workflows Step level input & outputs (artifacts/parameters) Loops Parameterization Conditionals Timeouts (step & workflow level) Retry (step & workflow level) Resubmit (memoized) Suspend & Resume Cancellation K8s resource orchestration Exit Hooks (notifications, cleanup) Garbage collection of completed workflow Scheduling (affinity/tolerations/node selectors) Volumes (ephemeral/existing) Parallelism limits Daemoned steps DinD (docker-in-docker) Script steps Event emission Prometheus metrics Multiple executors Multiple pod and workflow garbage collection strategies Automatically calculated resource usage per step Java/Golang/Python SDKs Pod Disruption Budget support Single-sign on (OAuth2/OIDC) Webhook triggering CLI Out-of-the box and custom Prometheus metrics Windows container support Embedded widgets Multiplex log viewer","title":"Features"},{"location":"#community-meetings","text":"We host monthly community meetings where we and the community showcase demos and discuss the current and future state of the project. Feel free to join us! For Community Meeting information, minutes and recordings please see here . 
Participation in the Argo Workflows project is governed by the CNCF Code of Conduct","title":"Community Meetings"},{"location":"#community-blogs-and-presentations","text":"Awesome-Argo: A Curated List of Awesome Projects and Resources Related to Argo Automation of Everything - How To Combine Argo Events, Workflows & Pipelines, CD, and Rollouts Argo Workflows and Pipelines - CI/CD, Machine Learning, and Other Kubernetes Workflows Argo Ansible role: Provisioning Argo Workflows on OpenShift Argo Workflows vs Apache Airflow CI/CD with Argo on Kubernetes Distributed Machine Learning Patterns from Manning Publication Running Argo Workflows Across Multiple Kubernetes Clusters Open Source Model Management Roundup: Polyaxon, Argo, and Seldon Producing 200 OpenStreetMap extracts in 35 minutes using a scalable data workflow Argo integration review TGI Kubernetes with Joe Beda: Argo workflow system","title":"Community Blogs and Presentations"},{"location":"#project-resources","text":"Argo Project GitHub organization Argo Website Argo Slack","title":"Project Resources"},{"location":"#security","text":"See SECURITY.md .","title":"Security"},{"location":"CONTRIBUTING/","text":"Contributing \u00b6 How To Provide Feedback \u00b6 Please raise an issue in Github . Code of Conduct \u00b6 See CNCF Code of Conduct . Community Meetings (monthly) \u00b6 A monthly opportunity for users and maintainers of Workflows and Events to share their current work and hear about what\u2019s coming on the roadmap. Please join us! For Community Meeting information, minutes and recordings please see here . Contributor Meetings (twice monthly) \u00b6 A weekly opportunity for committers and maintainers of Workflows and Events to discuss their current work and talk about what\u2019s next. Feel free to join us! For Contributor Meeting information, minutes and recordings please see here . How To Contribute \u00b6 We're always looking for contributors. Documentation - something missing or unclear? Please submit a pull request! Code contribution - investigate a good first issue , or anything not assigned. You can work on an issue without being assigned. Join the #argo-contributors channel on our Slack . Running Locally \u00b6 To run Argo Workflows locally for development: running locally . Committing \u00b6 See the Committing Guidelines . Dependencies \u00b6 Dependencies increase the risk of security issues and have on-going maintenance costs. The dependency must pass these test: A strong use case. It has an acceptable license (e.g. MIT). It is actively maintained. It has no security issues. Example, should we add fasttemplate , view the Snyk report : Test Outcome A strong use case. \u274c Fail. We can use text/template . It has an acceptable license (e.g. MIT) \u2705 Pass. MIT license. It is actively maintained. \u274c Fail. Project is inactive. It has no security issues. \u2705 Pass. No known security issues. No, we should not add that dependency. Test Policy \u00b6 Changes without either unit or e2e tests are unlikely to be accepted. See the pull request template . Contributor Workshop \u00b6 Please check out the following resources if you are interested in contributing: 90m hands-on contributor workshop . Deep-dive into components and hands-on experiments . 
Architecture overview .","title":"Contributing"},{"location":"CONTRIBUTING/#contributing","text":"","title":"Contributing"},{"location":"CONTRIBUTING/#how-to-provide-feedback","text":"Please raise an issue in Github .","title":"How To Provide Feedback"},{"location":"CONTRIBUTING/#code-of-conduct","text":"See CNCF Code of Conduct .","title":"Code of Conduct"},{"location":"CONTRIBUTING/#community-meetings-monthly","text":"A monthly opportunity for users and maintainers of Workflows and Events to share their current work and hear about what\u2019s coming on the roadmap. Please join us! For Community Meeting information, minutes and recordings please see here .","title":"Community Meetings (monthly)"},{"location":"CONTRIBUTING/#contributor-meetings-twice-monthly","text":"A weekly opportunity for committers and maintainers of Workflows and Events to discuss their current work and talk about what\u2019s next. Feel free to join us! For Contributor Meeting information, minutes and recordings please see here .","title":"Contributor Meetings (twice monthly)"},{"location":"CONTRIBUTING/#how-to-contribute","text":"We're always looking for contributors. Documentation - something missing or unclear? Please submit a pull request! Code contribution - investigate a good first issue , or anything not assigned. You can work on an issue without being assigned. Join the #argo-contributors channel on our Slack .","title":"How To Contribute"},{"location":"CONTRIBUTING/#running-locally","text":"To run Argo Workflows locally for development: running locally .","title":"Running Locally"},{"location":"CONTRIBUTING/#committing","text":"See the Committing Guidelines .","title":"Committing"},{"location":"CONTRIBUTING/#dependencies","text":"Dependencies increase the risk of security issues and have on-going maintenance costs. The dependency must pass these test: A strong use case. It has an acceptable license (e.g. MIT). It is actively maintained. It has no security issues. Example, should we add fasttemplate , view the Snyk report : Test Outcome A strong use case. \u274c Fail. We can use text/template . It has an acceptable license (e.g. MIT) \u2705 Pass. MIT license. It is actively maintained. \u274c Fail. Project is inactive. It has no security issues. \u2705 Pass. No known security issues. No, we should not add that dependency.","title":"Dependencies"},{"location":"CONTRIBUTING/#test-policy","text":"Changes without either unit or e2e tests are unlikely to be accepted. See the pull request template .","title":"Test Policy"},{"location":"CONTRIBUTING/#contributor-workshop","text":"Please check out the following resources if you are interested in contributing: 90m hands-on contributor workshop . Deep-dive into components and hands-on experiments . Architecture overview .","title":"Contributor Workshop"},{"location":"access-token/","text":"Access Token \u00b6 Overview \u00b6 If you want to automate tasks with the Argo Server API or CLI, you will need an access token. Prerequisites \u00b6 Firstly, create a role with minimal permissions. This example role for jenkins only permission to update and list workflows: kubectl create role jenkins --verb = list,update --resource = workflows.argoproj.io Create a service account for your service: kubectl create sa jenkins Tip for Tokens Creation \u00b6 Create a unique service account for each client: (a) you'll be able to correctly secure your workflows (b) revoke the token without impacting other clients. 
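To illustrate the tip above, each additional client gets its own role and service account created the same way as for jenkins (the ci-bot name here is hypothetical, purely for illustration):

    kubectl create role ci-bot --verb=list,update --resource=workflows.argoproj.io
    kubectl create sa ci-bot

The binding and token steps that follow are then repeated per client.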
Bind the service account to the role (in this case in the argo namespace): kubectl create rolebinding jenkins --role = jenkins --serviceaccount = argo:jenkins Token Creation \u00b6 You now need to create a secret to hold your token: kubectl apply -f - </oauth2/callback. It must be # browser-accessible. redirectUrl: https://argo-workflows.mydomain.com/oauth2/callback Example Helm chart configuration for authenticating against Argo CD's Dex \u00b6 argo-cd/values.yaml : dex : image : tag : v2.35.0 env : - name : ARGO_WORKFLOWS_SSO_CLIENT_SECRET valueFrom : secretKeyRef : name : argo-workflows-sso key : client-secret server : config : dex.config : | staticClients: - id: argo-workflows-sso name: Argo Workflow redirectURIs: - https://argo-workflows.mydomain.com/oauth2/callback secretEnv: ARGO_WORKFLOWS_SSO_CLIENT_SECRET argo-workflows/values.yaml : server : extraArgs : - --auth-mode=sso sso : issuer : https://argo-cd.mydomain.com/api/dex # sessionExpiry defines how long your login is valid for in hours. (optional, default: 10h) sessionExpiry : 240h clientId : name : argo-workflows-sso key : client-id clientSecret : name : argo-workflows-sso key : client-secret redirectUrl : https://argo-workflows.mydomain.com/oauth2/callback","title":"Use Argo CD Dex for authentication"},{"location":"argo-server-sso-argocd/#use-argo-cd-dex-for-authentication","text":"It is possible to have the Argo Workflows Server use the Argo CD Dex instance for authentication, for instance if you use Okta with SAML which cannot integrate with Argo Workflows directly. In order to make this happen, you will need the following: You must be using at least Dex v2.35.0 , because that's when staticClients[].secretEnv was added. That means Argo CD 1.7.12 and above. A secret containing two keys, client-id and client-secret to be used by both Dex and Argo Workflows Server. client-id is argo-workflows-sso in this example, client-secret can be any random string. If Argo CD and Argo Workflows are installed in different namespaces the secret must be present in both of them. Example: apiVersion : v1 kind : Secret metadata : name : argo-workflows-sso data : # client-id is 'argo-workflows-sso' client-id : YXJnby13b3JrZmxvd3Mtc3Nv # client-secret is 'MY-SECRET-STRING-CAN-BE-UUID' client-secret : TVktU0VDUkVULVNUUklORy1DQU4tQkUtVVVJRA== --auth-mode=sso server argument added A Dex staticClients configured for argo-workflows-sso The sso configuration filled out in Argo Workflows Server to match","title":"Use Argo CD Dex for authentication"},{"location":"argo-server-sso-argocd/#example-manifests-for-authenticating-against-argo-cds-dex-kustomize","text":"In Argo CD, add an environment variable to Dex deployment and configuration: --- apiVersion : apps/v1 kind : Deployment metadata : name : argocd-dex-server spec : template : spec : containers : - name : dex env : - name : ARGO_WORKFLOWS_SSO_CLIENT_SECRET valueFrom : secretKeyRef : name : argo-workflows-sso key : client-secret --- apiVersion : v1 kind : ConfigMap metadata : name : argocd-cm data : # Kustomize sees the value of dex.config as a single string instead of yaml. 
It will not merge # Dex settings, but instead it will replace the entire configuration with the settings below, # so add these to the existing config instead of setting them in a separate file dex.config : | # Setting staticClients allows Argo Workflows to use Argo CD's Dex installation for authentication staticClients: - id: argo-workflows-sso name: Argo Workflow redirectURIs: - https://argo-workflows.mydomain.com/oauth2/callback secretEnv: ARGO_WORKFLOWS_SSO_CLIENT_SECRET Note that the id field of staticClients must match the client-id . In Argo Workflows add --auth-mode=sso argument to argo-server deployment. --- apiVersion : apps/v1 kind : Deployment metadata : name : argo-server spec : template : spec : containers : - name : argo-server args : - server - --auth-mode=sso --- apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : # SSO Configuration for the Argo server. # You must also start argo server with `--auth-mode sso`. # https://argoproj.github.io/argo-workflows/argo-server-auth-mode/ sso : | # This is the root URL of the OIDC provider (required). issuer: https://argo-cd.mydomain.com/api/dex # This is name of the secret and the key in it that contain OIDC client # ID issued to the application by the provider (required). clientId: name: argo-workflows-sso key: client-id # This is name of the secret and the key in it that contain OIDC client # secret issued to the application by the provider (required). clientSecret: name: argo-workflows-sso key: client-secret # This is the redirect URL supplied to the provider (required). It must # be in the form /oauth2/callback. It must be # browser-accessible. redirectUrl: https://argo-workflows.mydomain.com/oauth2/callback","title":"Example manifests for authenticating against Argo CD's Dex (Kustomize)"},{"location":"argo-server-sso-argocd/#example-helm-chart-configuration-for-authenticating-against-argo-cds-dex","text":"argo-cd/values.yaml : dex : image : tag : v2.35.0 env : - name : ARGO_WORKFLOWS_SSO_CLIENT_SECRET valueFrom : secretKeyRef : name : argo-workflows-sso key : client-secret server : config : dex.config : | staticClients: - id: argo-workflows-sso name: Argo Workflow redirectURIs: - https://argo-workflows.mydomain.com/oauth2/callback secretEnv: ARGO_WORKFLOWS_SSO_CLIENT_SECRET argo-workflows/values.yaml : server : extraArgs : - --auth-mode=sso sso : issuer : https://argo-cd.mydomain.com/api/dex # sessionExpiry defines how long your login is valid for in hours. (optional, default: 10h) sessionExpiry : 240h clientId : name : argo-workflows-sso key : client-id clientSecret : name : argo-workflows-sso key : client-secret redirectUrl : https://argo-workflows.mydomain.com/oauth2/callback","title":"Example Helm chart configuration for authenticating against Argo CD's Dex"},{"location":"argo-server-sso/","text":"Argo Server SSO \u00b6 v2.9 and after It is possible to use Dex for authentication. This document describes how to set up Argo Workflows and Argo CD so that Argo Workflows uses Argo CD's Dex server for authentication. To start Argo Server with SSO \u00b6 Firstly, configure the settings workflow-controller-configmap.yaml with the correct OAuth 2 values. If working towards an OIDC configuration the Argo CD project has guides on its similar (though different) process for setting up OIDC providers. It also includes examples for specific providers. The main difference is that the Argo CD docs mention that their callback address endpoint is /auth/callback . 
For Argo Workflows, the default format is /oauth2/callback as shown in this comment in the default values.yaml file in the helm chart. Next, create the Kubernetes secrets for holding the OAuth2 client-id and client-secret . You may refer to the kubernetes documentation on Managing secrets . For example by using kubectl with literals: kubectl create secret -n argo generic client-id-secret \\ --from-literal = client-id-key = foo kubectl create secret -n argo generic client-secret-secret \\ --from-literal = client-secret-key = bar Then, start the Argo Server using the SSO auth mode : argo server --auth-mode sso --auth-mode ... Token Revocation \u00b6 v2.12 and after As of v2.12 we issue a JWE token for users rather than give them the ID token from your OAuth2 provider. This token is opaque and has a longer expiry time (10h by default). The token encryption key is automatically generated by the Argo Server and stored in a Kubernetes secret name sso . You can revoke all tokens by deleting the encryption key and restarting the Argo Server (so it generates a new key). kubectl delete secret sso Warning The old key will be in the memory the any running Argo Server, and they will therefore accept and user with token encrypted using the old key. Every Argo Server MUST be restarted. All users will need to log in again. Sorry. SSO RBAC \u00b6 v2.12 and after You can optionally add RBAC to SSO. This allows you to give different users different access levels. Except for client auth mode, all users of the Argo Server must ultimately use a service account. So we allow you to define rules that map a user (maybe using their OIDC groups) to a service account in the same namespace as argo server by annotating the service account. To allow service accounts to manage resources in other namespaces create a role and role binding in the target namespace. RBAC config is installation-level, so any changes will need to be made by the team that installed Argo. Many complex rules will be burdensome on that team. Firstly, enable the rbac: setting in workflow-controller-configmap.yaml . You likely want to configure RBAC using groups, so add scopes: to the SSO settings: sso : # ... scopes : - groups rbac : enabled : true Note Not all OIDC providers support the groups scope. Please speak to your provider about their options. To configure a service account to be used, annotate it: apiVersion : v1 kind : ServiceAccount metadata : name : admin-user annotations : # The rule is an expression used to determine if this service account # should be used. # * `groups` - an array of the OIDC groups # * `iss` - the issuer (\"argo-server\") # * `sub` - the subject (typically the username) # Must evaluate to a boolean. # If you want an account to be the default to use, this rule can be \"true\". # Details of the expression language are available in # https://github.com/antonmedv/expr/blob/master/docs/Language-Definition.md. workflows.argoproj.io/rbac-rule : \"'admin' in groups\" # The precedence is used to determine which service account to use whe # Precedence is an integer. It may be negative. If omitted, it defaults to \"0\". # Numerically higher values have higher precedence (not lower, which maybe # counter-intuitive to you). # If two rules match and have the same precedence, then which one used will # be arbitrary. workflows.argoproj.io/rbac-rule-precedence : \"1\" If no rule matches, we deny the user access. Tip: You'll probably want to configure a default account to use if no other rule matches, e.g. 
a read-only account, you can do this as follows: metadata : name : read-only annotations : workflows.argoproj.io/rbac-rule : \"true\" workflows.argoproj.io/rbac-rule-precedence : \"0\" The precedence must be the lowest of all your service accounts. As of Kubernetes v1.24, secrets for a service account token are no longer automatically created. Therefore, service account secrets for SSO RBAC must be created manually. See Manually create secrets for detailed instructions. SSO RBAC Namespace Delegation \u00b6 v3.3 and after You can optionally configure RBAC SSO per namespace. Typically, on organization has a Kubernetes cluster and a central team (the owner of the cluster) manages the cluster. Along with this, there are multiple namespaces which are owned by individual teams. This feature would help namespace owners to define RBAC for their own namespace. The feature is currently in beta. To enable the feature, set env variable SSO_DELEGATE_RBAC_TO_NAMESPACE=true in your argo-server deployment. Recommended usage \u00b6 Configure a default account in the installation namespace that allows access to all users of your organization. This service account allows a user to login to the cluster. You could optionally add a workflow read-only role and role-binding. apiVersion : v1 kind : ServiceAccount metadata : name : user-default-login annotations : workflows.argoproj.io/rbac-rule : \"true\" workflows.argoproj.io/rbac-rule-precedence : \"0\" Note All users MUST map to a cluster service account (such as the one above) before a namespace service account can apply. Now, for the namespace that you own, configure a service account that allows members of your team to perform operations in your namespace. Make sure that the precedence of the namespace service account is higher than the precedence of the login service account. Create an appropriate role for this service account and bind it with a role-binding. apiVersion : v1 kind : ServiceAccount metadata : name : my-namespace-read-write-user namespace : my-namespace annotations : workflows.argoproj.io/rbac-rule : \"'my-team' in groups\" workflows.argoproj.io/rbac-rule-precedence : \"1\" With this configuration, when a user is logged in via SSO, makes a request in my-namespace , and the rbac-rule matches, this service account allows the user to perform that operation. If no service account matches in the namespace, the first service account ( user-default-login ) and its associated role will be used to perform the operation. SSO Login Time \u00b6 v2.12 and after By default, your SSO session will expire after 10 hours. You can change this by adding a sessionExpiry to your workflow-controller-configmap.yaml under the SSO heading. sso : # Expiry defines how long your login is valid for in hours. (optional) sessionExpiry : 240h Custom claims \u00b6 v3.1.4 and after If your OIDC provider provides groups information with a claim name other than groups , you could configure config-map to specify custom claim name for groups. Argo now arbitrary custom claims and any claim can be used for expr eval . However, since group information is displayed in UI, it still needs to be an array of strings with group names as elements. The customClaim in this case will be mapped to groups key and we can use the same key groups for evaluating our expressions sso : # Specify custom claim name for OIDC groups. customGroupClaimName : argo_groups If your OIDC provider provides groups information only using the user-info endpoint (e.g. 
Okta), you could configure userInfoPath to specify the user info endpoint that contains the groups claim. sso : userInfoPath : /oauth2/v1/userinfo Example Expression \u00b6 # assuming customClaimGroupName: argo_groups workflows.argoproj.io/rbac-rule: \"'argo_admins' in groups\" Filtering groups \u00b6 v3.5 and above You can configure filterGroupsRegex to filter the groups returned by the OIDC provider. Some use-cases for this include: You have multiple applications using the same OIDC provider, and you only want to use groups that are relevant to Argo Workflows. You have many groups and exceed the 4KB cookie size limit (cookies are used to store authentication tokens). If this occurs, login will fail. sso : # Specify a list of regular expressions to filter the groups returned by the OIDC provider. # A logical \"OR\" is used between each regex in the list filterGroupsRegex : - \".*argo-wf.*\" - \".*argo-workflow.*\"","title":"Argo Server SSO"},{"location":"argo-server-sso/#argo-server-sso","text":"v2.9 and after It is possible to use Dex for authentication. This document describes how to set up Argo Workflows and Argo CD so that Argo Workflows uses Argo CD's Dex server for authentication.","title":"Argo Server SSO"},{"location":"argo-server-sso/#to-start-argo-server-with-sso","text":"Firstly, configure the settings workflow-controller-configmap.yaml with the correct OAuth 2 values. If working towards an OIDC configuration the Argo CD project has guides on its similar (though different) process for setting up OIDC providers. It also includes examples for specific providers. The main difference is that the Argo CD docs mention that their callback address endpoint is /auth/callback . For Argo Workflows, the default format is /oauth2/callback as shown in this comment in the default values.yaml file in the helm chart. Next, create the Kubernetes secrets for holding the OAuth2 client-id and client-secret . You may refer to the kubernetes documentation on Managing secrets . For example by using kubectl with literals: kubectl create secret -n argo generic client-id-secret \\ --from-literal = client-id-key = foo kubectl create secret -n argo generic client-secret-secret \\ --from-literal = client-secret-key = bar Then, start the Argo Server using the SSO auth mode : argo server --auth-mode sso --auth-mode ...","title":"To start Argo Server with SSO"},{"location":"argo-server-sso/#token-revocation","text":"v2.12 and after As of v2.12 we issue a JWE token for users rather than give them the ID token from your OAuth2 provider. This token is opaque and has a longer expiry time (10h by default). The token encryption key is automatically generated by the Argo Server and stored in a Kubernetes secret name sso . You can revoke all tokens by deleting the encryption key and restarting the Argo Server (so it generates a new key). kubectl delete secret sso Warning The old key will be in the memory the any running Argo Server, and they will therefore accept and user with token encrypted using the old key. Every Argo Server MUST be restarted. All users will need to log in again. Sorry.","title":"Token Revocation"},{"location":"argo-server-sso/#sso-rbac","text":"v2.12 and after You can optionally add RBAC to SSO. This allows you to give different users different access levels. Except for client auth mode, all users of the Argo Server must ultimately use a service account. 
So we allow you to define rules that map a user (maybe using their OIDC groups) to a service account in the same namespace as argo server by annotating the service account. To allow service accounts to manage resources in other namespaces create a role and role binding in the target namespace. RBAC config is installation-level, so any changes will need to be made by the team that installed Argo. Many complex rules will be burdensome on that team. Firstly, enable the rbac: setting in workflow-controller-configmap.yaml . You likely want to configure RBAC using groups, so add scopes: to the SSO settings: sso : # ... scopes : - groups rbac : enabled : true Note Not all OIDC providers support the groups scope. Please speak to your provider about their options. To configure a service account to be used, annotate it: apiVersion : v1 kind : ServiceAccount metadata : name : admin-user annotations : # The rule is an expression used to determine if this service account # should be used. # * `groups` - an array of the OIDC groups # * `iss` - the issuer (\"argo-server\") # * `sub` - the subject (typically the username) # Must evaluate to a boolean. # If you want an account to be the default to use, this rule can be \"true\". # Details of the expression language are available in # https://github.com/antonmedv/expr/blob/master/docs/Language-Definition.md. workflows.argoproj.io/rbac-rule : \"'admin' in groups\" # The precedence is used to determine which service account to use whe # Precedence is an integer. It may be negative. If omitted, it defaults to \"0\". # Numerically higher values have higher precedence (not lower, which maybe # counter-intuitive to you). # If two rules match and have the same precedence, then which one used will # be arbitrary. workflows.argoproj.io/rbac-rule-precedence : \"1\" If no rule matches, we deny the user access. Tip: You'll probably want to configure a default account to use if no other rule matches, e.g. a read-only account, you can do this as follows: metadata : name : read-only annotations : workflows.argoproj.io/rbac-rule : \"true\" workflows.argoproj.io/rbac-rule-precedence : \"0\" The precedence must be the lowest of all your service accounts. As of Kubernetes v1.24, secrets for a service account token are no longer automatically created. Therefore, service account secrets for SSO RBAC must be created manually. See Manually create secrets for detailed instructions.","title":"SSO RBAC"},{"location":"argo-server-sso/#sso-rbac-namespace-delegation","text":"v3.3 and after You can optionally configure RBAC SSO per namespace. Typically, on organization has a Kubernetes cluster and a central team (the owner of the cluster) manages the cluster. Along with this, there are multiple namespaces which are owned by individual teams. This feature would help namespace owners to define RBAC for their own namespace. The feature is currently in beta. To enable the feature, set env variable SSO_DELEGATE_RBAC_TO_NAMESPACE=true in your argo-server deployment.","title":"SSO RBAC Namespace Delegation"},{"location":"argo-server-sso/#recommended-usage","text":"Configure a default account in the installation namespace that allows access to all users of your organization. This service account allows a user to login to the cluster. You could optionally add a workflow read-only role and role-binding. 
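A minimal sketch of such a read-only role and role-binding, bound to the default login service account defined just below (the resource list and verbs here are assumptions, adjust them to your needs):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: workflow-read-only
    rules:
      - apiGroups: ["argoproj.io"]
        resources: ["workflows", "workflowtemplates", "cronworkflows"]
        verbs: ["get", "list", "watch"]   # read-only access
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: workflow-read-only
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: workflow-read-only
    subjects:
      - kind: ServiceAccount
        name: user-default-login   # the default login account shown below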
apiVersion : v1 kind : ServiceAccount metadata : name : user-default-login annotations : workflows.argoproj.io/rbac-rule : \"true\" workflows.argoproj.io/rbac-rule-precedence : \"0\" Note All users MUST map to a cluster service account (such as the one above) before a namespace service account can apply. Now, for the namespace that you own, configure a service account that allows members of your team to perform operations in your namespace. Make sure that the precedence of the namespace service account is higher than the precedence of the login service account. Create an appropriate role for this service account and bind it with a role-binding. apiVersion : v1 kind : ServiceAccount metadata : name : my-namespace-read-write-user namespace : my-namespace annotations : workflows.argoproj.io/rbac-rule : \"'my-team' in groups\" workflows.argoproj.io/rbac-rule-precedence : \"1\" With this configuration, when a user is logged in via SSO, makes a request in my-namespace , and the rbac-rule matches, this service account allows the user to perform that operation. If no service account matches in the namespace, the first service account ( user-default-login ) and its associated role will be used to perform the operation.","title":"Recommended usage"},{"location":"argo-server-sso/#sso-login-time","text":"v2.12 and after By default, your SSO session will expire after 10 hours. You can change this by adding a sessionExpiry to your workflow-controller-configmap.yaml under the SSO heading. sso : # Expiry defines how long your login is valid for in hours. (optional) sessionExpiry : 240h","title":"SSO Login Time"},{"location":"argo-server-sso/#custom-claims","text":"v3.1.4 and after If your OIDC provider provides groups information with a claim name other than groups , you could configure config-map to specify custom claim name for groups. Argo now arbitrary custom claims and any claim can be used for expr eval . However, since group information is displayed in UI, it still needs to be an array of strings with group names as elements. The customClaim in this case will be mapped to groups key and we can use the same key groups for evaluating our expressions sso : # Specify custom claim name for OIDC groups. customGroupClaimName : argo_groups If your OIDC provider provides groups information only using the user-info endpoint (e.g. Okta), you could configure userInfoPath to specify the user info endpoint that contains the groups claim. sso : userInfoPath : /oauth2/v1/userinfo","title":"Custom claims"},{"location":"argo-server-sso/#example-expression","text":"# assuming customClaimGroupName: argo_groups workflows.argoproj.io/rbac-rule: \"'argo_admins' in groups\"","title":"Example Expression"},{"location":"argo-server-sso/#filtering-groups","text":"v3.5 and above You can configure filterGroupsRegex to filter the groups returned by the OIDC provider. Some use-cases for this include: You have multiple applications using the same OIDC provider, and you only want to use groups that are relevant to Argo Workflows. You have many groups and exceed the 4KB cookie size limit (cookies are used to store authentication tokens). If this occurs, login will fail. sso : # Specify a list of regular expressions to filter the groups returned by the OIDC provider. 
# A logical \"OR\" is used between each regex in the list filterGroupsRegex : - \".*argo-wf.*\" - \".*argo-workflow.*\"","title":"Filtering groups"},{"location":"argo-server/","text":"Argo Server \u00b6 v2.5 and after HTTP vs HTTPS Since v3.0 the Argo Server listens for HTTPS requests, rather than HTTP. The Argo Server is a server that exposes an API and UI for workflows. You'll need to use this if you want to offload large workflows or the workflow archive . You can run this in either \"hosted\" or \"local\" mode. It replaces the Argo UI. Hosted Mode \u00b6 Use this mode if: You want a drop-in replacement for the Argo UI. If you need to prevent users from directly accessing the database. Hosted mode is provided as part of the standard manifests , specifically in argo-server-deployment.yaml . Local Mode \u00b6 Use this mode if: You want something that does not require complex set-up. You do not need to run a database. To run locally: argo server This will start a server on port 2746 which you can view . Options \u00b6 Auth Mode \u00b6 See auth . Managed Namespace \u00b6 See managed namespace . Base HREF \u00b6 If the server is running behind reverse proxy with a sub-path different from / (for example, /argo ), you can set an alternative sub-path with the --basehref flag or the BASE_HREF environment variable. You probably now should read how to set-up an ingress Transport Layer Security \u00b6 See TLS . SSO \u00b6 See SSO . See here about sharing Argo CD's Dex with Argo Workflows. Access the Argo Workflows UI \u00b6 By default, the Argo UI service is not exposed with an external IP. To access the UI, use one of the following: kubectl port-forward \u00b6 kubectl -n argo port-forward svc/argo-server 2746 :2746 Then visit: https://localhost:2746 Expose a LoadBalancer \u00b6 Update the service to be of type LoadBalancer . kubectl patch svc argo-server -n argo -p '{\"spec\": {\"type\": \"LoadBalancer\"}}' Then wait for the external IP to be made available: kubectl get svc argo-server -n argo NAME TYPE CLUSTER-IP EXTERNAL-IP PORT ( S ) AGE argo-server LoadBalancer 10 .43.43.130 172 .18.0.2 2746 :30008/TCP 18h Ingress \u00b6 You can get ingress working as follows: Add BASE_HREF as environment variable to deployment/argo-server . Do not forget to add a trailing '/' character. --- apiVersion : apps/v1 kind : Deployment metadata : name : argo-server spec : selector : matchLabels : app : argo-server template : metadata : labels : app : argo-server spec : containers : - args : - server env : - name : BASE_HREF value : /argo/ image : argoproj/argocli:latest name : argo-server ... Create a ingress, with the annotation ingress.kubernetes.io/rewrite-target: / : If TLS is enabled (default in v3.0 and after), the ingress controller must be told that the backend uses HTTPS. The method depends on the ingress controller, e.g. Traefik expects an ingress.kubernetes.io/protocol annotation, while ingress-nginx uses nginx.ingress.kubernetes.io/backend-protocol apiVersion : networking.k8s.io/v1beta1 kind : Ingress metadata : name : argo-server annotations : ingress.kubernetes.io/rewrite-target : /$2 ingress.kubernetes.io/protocol : https # Traefik nginx.ingress.kubernetes.io/backend-protocol : https # ingress-nginx spec : rules : - http : paths : - backend : serviceName : argo-server servicePort : 2746 path : /argo(/|$)(.*) Learn more Security \u00b6 Users should consider the following in their set-up of the Argo Server: API Authentication Rate Limiting \u00b6 Argo Server does not perform authentication directly. 
It delegates this to either the Kubernetes API Server (when --auth-mode=client ) and the OAuth provider (when --auth-mode=sso ). In each case, it is recommended that the delegate implements any authentication rate limiting you need. IP Address Logging \u00b6 Argo Server does not log the IP addresses of API requests. We recommend you put the Argo Server behind a load balancer, and that load balancer is configured to log the IP addresses of requests that return authentication or authorization errors. Rate Limiting \u00b6 v3.4 and after Argo Server by default rate limits to 1000 per IP per minute, you can configure it through --api-rate-limit . You can access additional information through the following headers. X-Rate-Limit-Limit - the rate limit ceiling that is applicable for the current request. X-Rate-Limit-Remaining - the number of requests left for the current rate-limit window. X-Rate-Limit-Reset - the time at which the rate limit resets, specified in UTC time. Retry-After - indicate when a client should retry requests (when the rate limit expires), in UTC time.","title":"Argo Server"},{"location":"argo-server/#argo-server","text":"v2.5 and after HTTP vs HTTPS Since v3.0 the Argo Server listens for HTTPS requests, rather than HTTP. The Argo Server is a server that exposes an API and UI for workflows. You'll need to use this if you want to offload large workflows or the workflow archive . You can run this in either \"hosted\" or \"local\" mode. It replaces the Argo UI.","title":"Argo Server"},{"location":"argo-server/#hosted-mode","text":"Use this mode if: You want a drop-in replacement for the Argo UI. If you need to prevent users from directly accessing the database. Hosted mode is provided as part of the standard manifests , specifically in argo-server-deployment.yaml .","title":"Hosted Mode"},{"location":"argo-server/#local-mode","text":"Use this mode if: You want something that does not require complex set-up. You do not need to run a database. To run locally: argo server This will start a server on port 2746 which you can view .","title":"Local Mode"},{"location":"argo-server/#options","text":"","title":"Options"},{"location":"argo-server/#auth-mode","text":"See auth .","title":"Auth Mode"},{"location":"argo-server/#managed-namespace","text":"See managed namespace .","title":"Managed Namespace"},{"location":"argo-server/#base-href","text":"If the server is running behind reverse proxy with a sub-path different from / (for example, /argo ), you can set an alternative sub-path with the --basehref flag or the BASE_HREF environment variable. You probably now should read how to set-up an ingress","title":"Base HREF"},{"location":"argo-server/#transport-layer-security","text":"See TLS .","title":"Transport Layer Security"},{"location":"argo-server/#sso","text":"See SSO . See here about sharing Argo CD's Dex with Argo Workflows.","title":"SSO"},{"location":"argo-server/#access-the-argo-workflows-ui","text":"By default, the Argo UI service is not exposed with an external IP. To access the UI, use one of the following:","title":"Access the Argo Workflows UI"},{"location":"argo-server/#kubectl-port-forward","text":"kubectl -n argo port-forward svc/argo-server 2746 :2746 Then visit: https://localhost:2746","title":"kubectl port-forward"},{"location":"argo-server/#expose-a-loadbalancer","text":"Update the service to be of type LoadBalancer . 
kubectl patch svc argo-server -n argo -p '{\"spec\": {\"type\": \"LoadBalancer\"}}' Then wait for the external IP to be made available: kubectl get svc argo-server -n argo NAME TYPE CLUSTER-IP EXTERNAL-IP PORT ( S ) AGE argo-server LoadBalancer 10 .43.43.130 172 .18.0.2 2746 :30008/TCP 18h","title":"Expose a LoadBalancer"},{"location":"argo-server/#ingress","text":"You can get ingress working as follows: Add BASE_HREF as environment variable to deployment/argo-server . Do not forget to add a trailing '/' character. --- apiVersion : apps/v1 kind : Deployment metadata : name : argo-server spec : selector : matchLabels : app : argo-server template : metadata : labels : app : argo-server spec : containers : - args : - server env : - name : BASE_HREF value : /argo/ image : argoproj/argocli:latest name : argo-server ... Create a ingress, with the annotation ingress.kubernetes.io/rewrite-target: / : If TLS is enabled (default in v3.0 and after), the ingress controller must be told that the backend uses HTTPS. The method depends on the ingress controller, e.g. Traefik expects an ingress.kubernetes.io/protocol annotation, while ingress-nginx uses nginx.ingress.kubernetes.io/backend-protocol apiVersion : networking.k8s.io/v1beta1 kind : Ingress metadata : name : argo-server annotations : ingress.kubernetes.io/rewrite-target : /$2 ingress.kubernetes.io/protocol : https # Traefik nginx.ingress.kubernetes.io/backend-protocol : https # ingress-nginx spec : rules : - http : paths : - backend : serviceName : argo-server servicePort : 2746 path : /argo(/|$)(.*) Learn more","title":"Ingress"},{"location":"argo-server/#security","text":"Users should consider the following in their set-up of the Argo Server:","title":"Security"},{"location":"argo-server/#api-authentication-rate-limiting","text":"Argo Server does not perform authentication directly. It delegates this to either the Kubernetes API Server (when --auth-mode=client ) and the OAuth provider (when --auth-mode=sso ). In each case, it is recommended that the delegate implements any authentication rate limiting you need.","title":"API Authentication Rate Limiting"},{"location":"argo-server/#ip-address-logging","text":"Argo Server does not log the IP addresses of API requests. We recommend you put the Argo Server behind a load balancer, and that load balancer is configured to log the IP addresses of requests that return authentication or authorization errors.","title":"IP Address Logging"},{"location":"argo-server/#rate-limiting","text":"v3.4 and after Argo Server by default rate limits to 1000 per IP per minute, you can configure it through --api-rate-limit . You can access additional information through the following headers. X-Rate-Limit-Limit - the rate limit ceiling that is applicable for the current request. X-Rate-Limit-Remaining - the number of requests left for the current rate-limit window. X-Rate-Limit-Reset - the time at which the rate limit resets, specified in UTC time. Retry-After - indicate when a client should retry requests (when the rate limit expires), in UTC time.","title":"Rate Limiting"},{"location":"artifact-repository-ref/","text":"Artifact Repository Ref \u00b6 v2.9 and after You can reduce duplication in your templates by configuring repositories that can be accessed by any workflow. This can also remove sensitive information from your templates. 
Create a suitable config map in either (a) your workflows namespace or (b) in the managed namespace: apiVersion : v1 kind : ConfigMap metadata : # If you want to use this config map by default, name it \"artifact-repositories\". Otherwise, you can provide a reference to a # different config map in `artifactRepositoryRef.configMap`. name : my-artifact-repository annotations : # v3.0 and after - if you want to use a specific key, put that key into this annotation. workflows.argoproj.io/default-artifact-repository : default-v1-s3-artifact-repository data : default-v1-s3-artifact-repository : | s3: bucket: my-bucket endpoint: minio:9000 insecure: true accessKeySecret: name: my-minio-cred key: accesskey secretKeySecret: name: my-minio-cred key: secretkey v2-s3-artifact-repository : | s3: ... You can override the artifact repository for a workflow as follows: spec : artifactRepositoryRef : configMap : my-artifact-repository # default is \"artifact-repositories\" key : v2-s3-artifact-repository # default can be set by the `workflows.argoproj.io/default-artifact-repository` annotation in config map. This feature gives maximum benefit when used with key-only artifacts . Reference .","title":"Artifact Repository Ref"},{"location":"artifact-repository-ref/#artifact-repository-ref","text":"v2.9 and after You can reduce duplication in your templates by configuring repositories that can be accessed by any workflow. This can also remove sensitive information from your templates. Create a suitable config map in either (a) your workflows namespace or (b) in the managed namespace: apiVersion : v1 kind : ConfigMap metadata : # If you want to use this config map by default, name it \"artifact-repositories\". Otherwise, you can provide a reference to a # different config map in `artifactRepositoryRef.configMap`. name : my-artifact-repository annotations : # v3.0 and after - if you want to use a specific key, put that key into this annotation. workflows.argoproj.io/default-artifact-repository : default-v1-s3-artifact-repository data : default-v1-s3-artifact-repository : | s3: bucket: my-bucket endpoint: minio:9000 insecure: true accessKeySecret: name: my-minio-cred key: accesskey secretKeySecret: name: my-minio-cred key: secretkey v2-s3-artifact-repository : | s3: ... You can override the artifact repository for a workflow as follows: spec : artifactRepositoryRef : configMap : my-artifact-repository # default is \"artifact-repositories\" key : v2-s3-artifact-repository # default can be set by the `workflows.argoproj.io/default-artifact-repository` annotation in config map. This feature gives maximum benefit when used with key-only artifacts . Reference .","title":"Artifact Repository Ref"},{"location":"artifact-visualization/","text":"Artifact Visualization \u00b6 since v3.4 Artifacts can be viewed in the UI. Use cases: Comparing ML pipeline runs from generated charts. Visualizing end results of ML pipeline runs. Debugging workflows where visual artifacts are the most helpful. Artifacts appear as elements in the workflow DAG that you can click on. When you click on the artifact, a panel appears. The first time this appears explanatory text is shown to help you understand if you might need to change your workflows to use this feature. Known file types such as images, text or HTML are shown in an inline-frame ( iframe ). Artifacts are sandboxed using a Content-Security-Policy that prevents JavaScript execution. JSON is shown with syntax highlighting. To start, take a look at the example . 
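For orientation, a sketch (combining settings described in the sections that follow; names and paths are illustrative) of an output artifact the UI can render inline:

    outputs:
      artifacts:
        - name: report
          path: /tmp/report.html
          # leave the artifact uncompressed so the UI can display it (see ".tgz" below)
          archive:
            none: {}
          # a key with a known file extension lets the UI pick a renderer (see "File" below)
          s3:
            key: reports/report.html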
Artifact Types \u00b6 An artifact maybe a .tgz , file or directory. .tgz \u00b6 Viewing of .tgz is not supported in the UI. By default artifacts are compressed as a .tgz . Only artifacts that were not compressed can be viewed. To prevent compression, set archive to none to prevent compression: - name : artifact # ... archive : none : { } File \u00b6 Files maybe shown in the UI. To determine if a file can be shown, the UI checks if the artifact's file extension is supported. The extension is found in the artifact's key. To view a file, add the extension to the key: - name : single-file s3 : key : visualization.png Directory \u00b6 Directories are shown in the UI. The UI considers any key with a trailing-slash to be a directory. To view a directory, add a trailing-slash: - name : reports s3 : key : reports/ If the directory contains index.html , then that will be shown, otherwise a directory listing is displayed. \u26a0\ufe0f HTML files may contain CSS and images served from the same origin. Scripts are not allowed. Nothing may be remotely loaded. Security \u00b6 Content Security Policy \u00b6 We assume that artifacts are not trusted, so by default, artifacts are served with a Content-Security-Policy that disables JavaScript and remote files. This is similar to what happens when you include third-party scripts, such as analytic tracking, in your website. However, those tracking codes are normally served from a different domain to your main website. Artifacts are served from the same origin, so normal browser controls are not secure enough. Sub-Path Access \u00b6 Previously, users could access the artifacts of any workflows they could access. To allow HTML files to link to other files within their tree, you can now access any sub-paths of the artifact's key. Example: The artifact produces a folder in an S3 bucket named my-bucket , with a key report/ . You can also access anything matching report/* .","title":"Artifact Visualization"},{"location":"artifact-visualization/#artifact-visualization","text":"since v3.4 Artifacts can be viewed in the UI. Use cases: Comparing ML pipeline runs from generated charts. Visualizing end results of ML pipeline runs. Debugging workflows where visual artifacts are the most helpful. Artifacts appear as elements in the workflow DAG that you can click on. When you click on the artifact, a panel appears. The first time this appears explanatory text is shown to help you understand if you might need to change your workflows to use this feature. Known file types such as images, text or HTML are shown in an inline-frame ( iframe ). Artifacts are sandboxed using a Content-Security-Policy that prevents JavaScript execution. JSON is shown with syntax highlighting. To start, take a look at the example .","title":"Artifact Visualization"},{"location":"artifact-visualization/#artifact-types","text":"An artifact maybe a .tgz , file or directory.","title":"Artifact Types"},{"location":"artifact-visualization/#tgz","text":"Viewing of .tgz is not supported in the UI. By default artifacts are compressed as a .tgz . Only artifacts that were not compressed can be viewed. To prevent compression, set archive to none to prevent compression: - name : artifact # ... archive : none : { }","title":".tgz"},{"location":"artifact-visualization/#file","text":"Files maybe shown in the UI. To determine if a file can be shown, the UI checks if the artifact's file extension is supported. The extension is found in the artifact's key. 
To view a file, add the extension to the key: - name : single-file s3 : key : visualization.png","title":"File"},{"location":"artifact-visualization/#directory","text":"Directories are shown in the UI. The UI considers any key with a trailing-slash to be a directory. To view a directory, add a trailing-slash: - name : reports s3 : key : reports/ If the directory contains index.html , then that will be shown, otherwise a directory listing is displayed. \u26a0\ufe0f HTML files may contain CSS and images served from the same origin. Scripts are not allowed. Nothing may be remotely loaded.","title":"Directory"},{"location":"artifact-visualization/#security","text":"","title":"Security"},{"location":"artifact-visualization/#content-security-policy","text":"We assume that artifacts are not trusted, so by default, artifacts are served with a Content-Security-Policy that disables JavaScript and remote files. This is similar to what happens when you include third-party scripts, such as analytic tracking, in your website. However, those tracking codes are normally served from a different domain to your main website. Artifacts are served from the same origin, so normal browser controls are not secure enough.","title":"Content Security Policy"},{"location":"artifact-visualization/#sub-path-access","text":"Previously, users could access the artifacts of any workflows they could access. To allow HTML files to link to other files within their tree, you can now access any sub-paths of the artifact's key. Example: The artifact produces a folder in an S3 bucket named my-bucket , with a key report/ . You can also access anything matching report/* .","title":"Sub-Path Access"},{"location":"async-pattern/","text":"Asynchronous Job Pattern \u00b6 Introduction \u00b6 If triggering an external job (e.g. an Amazon EMR job) from Argo that does not run to completion in a container, there are two options: create a container that polls the external job completion status combine a trigger step that starts the job with a suspend step that is resumed by an API call to Argo when the external job is complete. This document describes the second option in more detail. The pattern \u00b6 The pattern involves two steps - the first step is a short-running step that triggers a long-running job outside Argo (e.g. an HTTP submission), and the second step is a suspend step that suspends workflow execution and is ultimately either resumed or stopped (i.e. failed) via a call to the Argo API when the job outside Argo succeeds or fails. 
When implemented as a WorkflowTemplate it can look something like this: apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : external-job-template spec : entrypoint : run-external-job arguments : parameters : - name : \"job-cmd\" templates : - name : run-external-job inputs : parameters : - name : \"job-cmd\" value : \"{{workflow.parameters.job-cmd}}\" steps : - - name : trigger-job template : trigger-job arguments : parameters : - name : \"job-cmd\" value : \"{{inputs.parameters.job-cmd}}\" - - name : wait-completion template : wait-completion arguments : parameters : - name : uuid value : \"{{steps.trigger-job.outputs.result}}\" - name : trigger-job inputs : parameters : - name : \"job-cmd\" container : image : appropriate/curl:latest command : [ \"/bin/sh\" , \"-c\" ] args : [ \"{{inputs.parameters.job-cmd}}\" ] - name : wait-completion inputs : parameters : - name : uuid suspend : { } In this case the job-cmd parameter can be a command that makes an HTTP call via curl to an endpoint that returns a job UUID. More sophisticated submission and parsing of submission output could be done with something like a Python script step. On job completion the external job would need to call either resume if successful: You may need an access token . curl --request PUT \\ --url https://localhost:2746/api/v1/workflows///resume --header 'content-type: application/json' \\ --header \"Authorization: $ARGO_TOKEN \" \\ --data '{ \"namespace\": \"\", \"name\": \"\", \"nodeFieldSelector\": \"inputs.parameters.uuid.value=\" }' or stop if unsuccessful: curl --request PUT \\ --url https://localhost:2746/api/v1/workflows///stop --header 'content-type: application/json' \\ --header \"Authorization: $ARGO_TOKEN \" \\ --data '{ \"namespace\": \"\", \"name\": \"\", \"nodeFieldSelector\": \"inputs.parameters.uuid.value=\", \"message\": \"\" }' Retrying failed jobs \u00b6 Using argo retry on failed jobs that follow this pattern will cause Argo to re-attempt the suspend step without re-triggering the job. Instead you need to use the --restart-successful option, e.g. if using the template from above: argo retry --restart-successful --node-field-selector templateRef.template = run-external-job,phase = Failed","title":"Asynchronous Job Pattern"},{"location":"async-pattern/#asynchronous-job-pattern","text":"","title":"Asynchronous Job Pattern"},{"location":"async-pattern/#introduction","text":"If triggering an external job (e.g. an Amazon EMR job) from Argo that does not run to completion in a container, there are two options: create a container that polls the external job completion status combine a trigger step that starts the job with a suspend step that is resumed by an API call to Argo when the external job is complete. This document describes the second option in more detail.","title":"Introduction"},{"location":"async-pattern/#the-pattern","text":"The pattern involves two steps - the first step is a short-running step that triggers a long-running job outside Argo (e.g. an HTTP submission), and the second step is a suspend step that suspends workflow execution and is ultimately either resumed or stopped (i.e. failed) via a call to the Argo API when the job outside Argo succeeds or fails. 
When implemented as a WorkflowTemplate it can look something like this: apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : external-job-template spec : entrypoint : run-external-job arguments : parameters : - name : \"job-cmd\" templates : - name : run-external-job inputs : parameters : - name : \"job-cmd\" value : \"{{workflow.parameters.job-cmd}}\" steps : - - name : trigger-job template : trigger-job arguments : parameters : - name : \"job-cmd\" value : \"{{inputs.parameters.job-cmd}}\" - - name : wait-completion template : wait-completion arguments : parameters : - name : uuid value : \"{{steps.trigger-job.outputs.result}}\" - name : trigger-job inputs : parameters : - name : \"job-cmd\" container : image : appropriate/curl:latest command : [ \"/bin/sh\" , \"-c\" ] args : [ \"{{inputs.parameters.job-cmd}}\" ] - name : wait-completion inputs : parameters : - name : uuid suspend : { } In this case the job-cmd parameter can be a command that makes an HTTP call via curl to an endpoint that returns a job UUID. More sophisticated submission and parsing of submission output could be done with something like a Python script step. On job completion the external job would need to call either resume if successful: You may need an access token . curl --request PUT \\ --url https://localhost:2746/api/v1/workflows///resume --header 'content-type: application/json' \\ --header \"Authorization: $ARGO_TOKEN \" \\ --data '{ \"namespace\": \"\", \"name\": \"\", \"nodeFieldSelector\": \"inputs.parameters.uuid.value=\" }' or stop if unsuccessful: curl --request PUT \\ --url https://localhost:2746/api/v1/workflows///stop --header 'content-type: application/json' \\ --header \"Authorization: $ARGO_TOKEN \" \\ --data '{ \"namespace\": \"\", \"name\": \"\", \"nodeFieldSelector\": \"inputs.parameters.uuid.value=\", \"message\": \"\" }'","title":"The pattern"},{"location":"async-pattern/#retrying-failed-jobs","text":"Using argo retry on failed jobs that follow this pattern will cause Argo to re-attempt the suspend step without re-triggering the job. Instead you need to use the --restart-successful option, e.g. if using the template from above: argo retry --restart-successful --node-field-selector templateRef.template = run-external-job,phase = Failed","title":"Retrying failed jobs"},{"location":"client-libraries/","text":"Client Libraries \u00b6 This page contains an overview of the client libraries for using the Argo API from various programming languages. To write applications using the REST API, you do not need to implement the API calls and request/response types yourself. You can use a client library for the programming language you are using. Client libraries often handle common tasks such as authentication for you. Auto-generated client libraries \u00b6 The following client libraries are auto-generated using OpenAPI Generator . Please expect very minimal support from the Argo team. Language Client Library Examples/Docs Golang apiclient.go Example Java Java Python Python Community-maintained client libraries \u00b6 The following client libraries are provided and maintained by their authors, not the Argo team. Language Client Library Examples/Docs Python Couler Multi-workflow engine support Python SDK Python Hera Easy and accessible Argo workflows construction and submission in Python","title":"Client Libraries"},{"location":"client-libraries/#client-libraries","text":"This page contains an overview of the client libraries for using the Argo API from various programming languages. 
To write applications using the REST API, you do not need to implement the API calls and request/response types yourself. You can use a client library for the programming language you are using. Client libraries often handle common tasks such as authentication for you.","title":"Client Libraries"},{"location":"client-libraries/#auto-generated-client-libraries","text":"The following client libraries are auto-generated using OpenAPI Generator . Please expect very minimal support from the Argo team. Language Client Library Examples/Docs Golang apiclient.go Example Java Java Python Python","title":"Auto-generated client libraries"},{"location":"client-libraries/#community-maintained-client-libraries","text":"The following client libraries are provided and maintained by their authors, not the Argo team. Language Client Library Examples/Docs Python Couler Multi-workflow engine support Python SDK Python Hera Easy and accessible Argo workflows construction and submission in Python","title":"Community-maintained client libraries"},{"location":"cluster-workflow-templates/","text":"Cluster Workflow Templates \u00b6 v2.8 and after Introduction \u00b6 ClusterWorkflowTemplates are cluster scoped WorkflowTemplates . ClusterWorkflowTemplate can be created cluster scoped like ClusterRole and can be accessed across all namespaces in the cluster. WorkflowTemplates documentation link Defining ClusterWorkflowTemplate \u00b6 apiVersion : argoproj.io/v1alpha1 kind : ClusterWorkflowTemplate metadata : name : cluster-workflow-template-whalesay-template spec : templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] Referencing other ClusterWorkflowTemplates \u00b6 You can reference templates from other ClusterWorkflowTemplates using a templateRef field with clusterScope: true . Just as how you reference other templates within the same Workflow , you should do so from a steps or dag template. Here is an example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay steps : # You should only reference external \"templates\" in a \"steps\" or \"dag\" \"template\". - - name : call-whalesay-template templateRef : # You can reference a \"template\" from another \"WorkflowTemplate or ClusterWorkflowTemplate\" using this field name : cluster-workflow-template-whalesay-template # This is the name of the \"WorkflowTemplate or ClusterWorkflowTemplate\" CRD that contains the \"template\" you want template : whalesay-template # This is the name of the \"template\" you want to reference clusterScope : true # This field indicates this templateRef is pointing ClusterWorkflowTemplate arguments : # You can pass in arguments as normal parameters : - name : message value : \"hello world\" 2.9 and after Create Workflow from ClusterWorkflowTemplate Spec \u00b6 You can create Workflow from ClusterWorkflowTemplate spec using workflowTemplateRef with clusterScope: true . 
If you pass arguments to the created Workflow , they will be merged with the cluster workflow template arguments. Here is an example of a ClusterWorkflowTemplate with an entrypoint and arguments : apiVersion : argoproj.io/v1alpha1 kind : ClusterWorkflowTemplate metadata : name : cluster-workflow-template-submittable spec : entrypoint : whalesay-template arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] Here is an example of creating a Workflow from a ClusterWorkflowTemplate , passing an entrypoint and arguments to the ClusterWorkflowTemplate : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : cluster-workflow-template-hello-world- spec : entrypoint : whalesay-template arguments : parameters : - name : message value : \"from workflow\" workflowTemplateRef : name : cluster-workflow-template-submittable clusterScope : true Here is an example of creating a Workflow from a ClusterWorkflowTemplate , using the ClusterWorkflowTemplate 's entrypoint and arguments : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : cluster-workflow-template-hello-world- spec : workflowTemplateRef : name : cluster-workflow-template-submittable clusterScope : true Managing ClusterWorkflowTemplates \u00b6 CLI \u00b6 You can create some example templates as follows: argo cluster-template create https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/cluster-workflow-template/clustertemplates.yaml Then submit a workflow using one of those templates: argo submit https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml 2.7 and after Then submit a ClusterWorkflowTemplate as a Workflow : argo submit --from clusterworkflowtemplate/cluster-workflow-template-submittable kubectl \u00b6 Using kubectl apply -f and kubectl get cwft UI \u00b6 ClusterWorkflowTemplate resources can also be managed by the UI","title":"Cluster Workflow Templates"},{"location":"cluster-workflow-templates/#cluster-workflow-templates","text":"v2.8 and after","title":"Cluster Workflow Templates"},{"location":"cluster-workflow-templates/#introduction","text":"ClusterWorkflowTemplates are cluster scoped WorkflowTemplates . A ClusterWorkflowTemplate is created at cluster scope, like a ClusterRole , and can be accessed across all namespaces in the cluster. WorkflowTemplates documentation link","title":"Introduction"},{"location":"cluster-workflow-templates/#defining-clusterworkflowtemplate","text":"apiVersion : argoproj.io/v1alpha1 kind : ClusterWorkflowTemplate metadata : name : cluster-workflow-template-whalesay-template spec : templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ]","title":"Defining ClusterWorkflowTemplate"},{"location":"cluster-workflow-templates/#referencing-other-clusterworkflowtemplates","text":"You can reference templates from other ClusterWorkflowTemplates using a templateRef field with clusterScope: true . Just as when you reference other templates within the same Workflow , you should do so from a steps or dag template. 
Here is an example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay steps : # You should only reference external \"templates\" in a \"steps\" or \"dag\" \"template\". - - name : call-whalesay-template templateRef : # You can reference a \"template\" from another \"WorkflowTemplate or ClusterWorkflowTemplate\" using this field name : cluster-workflow-template-whalesay-template # This is the name of the \"WorkflowTemplate or ClusterWorkflowTemplate\" CRD that contains the \"template\" you want template : whalesay-template # This is the name of the \"template\" you want to reference clusterScope : true # This field indicates this templateRef is pointing ClusterWorkflowTemplate arguments : # You can pass in arguments as normal parameters : - name : message value : \"hello world\" 2.9 and after","title":"Referencing other ClusterWorkflowTemplates"},{"location":"cluster-workflow-templates/#create-workflow-from-clusterworkflowtemplate-spec","text":"You can create Workflow from ClusterWorkflowTemplate spec using workflowTemplateRef with clusterScope: true . If you pass the arguments to created Workflow , it will be merged with cluster workflow template arguments Here is an example for ClusterWorkflowTemplate with entrypoint and arguments apiVersion : argoproj.io/v1alpha1 kind : ClusterWorkflowTemplate metadata : name : cluster-workflow-template-submittable spec : entrypoint : whalesay-template arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] Here is an example for creating ClusterWorkflowTemplate as Workflow with passing entrypoint and arguments to ClusterWorkflowTemplate apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : cluster-workflow-template-hello-world- spec : entrypoint : whalesay-template arguments : parameters : - name : message value : \"from workflow\" workflowTemplateRef : name : cluster-workflow-template-submittable clusterScope : true Here is an example of a creating WorkflowTemplate as Workflow and using WorkflowTemplates 's entrypoint and Workflow Arguments apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : cluster-workflow-template-hello-world- spec : workflowTemplateRef : name : cluster-workflow-template-submittable clusterScope : true","title":"Create Workflow from ClusterWorkflowTemplate Spec"},{"location":"cluster-workflow-templates/#managing-clusterworkflowtemplates","text":"","title":"Managing ClusterWorkflowTemplates"},{"location":"cluster-workflow-templates/#cli","text":"You can create some example templates as follows: argo cluster-template create https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/cluster-workflow-template/clustertemplates.yaml The submit a workflow using one of those templates: argo submit https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml 2.7 and after The submit a ClusterWorkflowTemplate as a Workflow : argo submit --from clusterworkflowtemplate/cluster-workflow-template-submittable","title":"CLI"},{"location":"cluster-workflow-templates/#kubectl","text":"Using kubectl apply -f and kubectl get cwft","title":"kubectl"},{"location":"cluster-workflow-templates/#ui","text":"ClusterWorkflowTemplate resources can 
also be managed by the UI","title":"UI"},{"location":"conditional-artifacts-parameters/","text":"Conditional Artifacts and Parameters \u00b6 v3.1 and after You can set Step/DAG level artifacts or parameters based on an expression . Use fromExpression under a Step/DAG level output artifact and expression under a Step/DAG level output parameter. Conditional Artifacts \u00b6 - name : coinflip steps : - - name : flip-coin template : flip-coin - - name : heads template : heads when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails template : tails when : \"{{steps.flip-coin.outputs.result}} == tails\" outputs : artifacts : - name : result fromExpression : \"steps['flip-coin'].outputs.result == 'heads' ? steps.heads.outputs.artifacts.headsresult : steps.tails.outputs.artifacts.tailsresult\" Steps artifacts example DAG artifacts example Conditional Parameters \u00b6 - name : coinflip steps : - - name : flip-coin template : flip-coin - - name : heads template : heads when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails template : tails when : \"{{steps.flip-coin.outputs.result}} == tails\" outputs : parameters : - name : stepresult valueFrom : expression : \"steps['flip-coin'].outputs.result == 'heads' ? steps.heads.outputs.result : steps.tails.outputs.result\" Steps parameter example DAG parameter example Advanced example: fibonacci Sequence","title":"Conditional Artifacts and Parameters"},{"location":"conditional-artifacts-parameters/#conditional-artifacts-and-parameters","text":"v3.1 and after You can set Step/DAG level artifacts or parameters based on an expression . Use fromExpression under a Step/DAG level output artifact and expression under a Step/DAG level output parameter.","title":"Conditional Artifacts and Parameters"},{"location":"conditional-artifacts-parameters/#conditional-artifacts","text":"- name : coinflip steps : - - name : flip-coin template : flip-coin - - name : heads template : heads when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails template : tails when : \"{{steps.flip-coin.outputs.result}} == tails\" outputs : artifacts : - name : result fromExpression : \"steps['flip-coin'].outputs.result == 'heads' ? steps.heads.outputs.artifacts.headsresult : steps.tails.outputs.artifacts.tailsresult\" Steps artifacts example DAG artifacts example","title":"Conditional Artifacts"},{"location":"conditional-artifacts-parameters/#conditional-parameters","text":"- name : coinflip steps : - - name : flip-coin template : flip-coin - - name : heads template : heads when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails template : tails when : \"{{steps.flip-coin.outputs.result}} == tails\" outputs : parameters : - name : stepresult valueFrom : expression : \"steps['flip-coin'].outputs.result == 'heads' ? steps.heads.outputs.result : steps.tails.outputs.result\" Steps parameter example DAG parameter example Advanced example: fibonacci Sequence","title":"Conditional Parameters"},{"location":"configure-archive-logs/","text":"Configuring Archive Logs \u00b6 \u26a0\ufe0f We do not recommend you rely on Argo Workflows to archive logs. Instead, use a conventional Kubernetes logging facility. To enable automatic pipeline logging, you need to configure archiveLogs at workflow-controller config-map, workflow spec, or template level. You also need to configure Artifact Repository to define where this logging artifact is stored. 
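At the controller level, the switch sits alongside the artifact repository definition; a minimal sketch of the workflow-controller-configmap entry, assuming an S3 repository (the bucket and endpoint values are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  artifactRepository: |
    archiveLogs: true           # archive logs for all workflows by default
    s3:
      bucket: my-bucket         # placeholder bucket
      endpoint: s3.amazonaws.com
```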
Archive logs follows priorities: workflow-controller config (on) > workflow spec (on/off) > template (on/off) Controller Config Map Workflow Spec Template are we archiving logs? true true true true true true false true true false true true true false false true false true true true false true false false false false true true false false false false Configuring Workflow Controller Config Map \u00b6 See Workflow Controller Config Map Configuring Workflow Spec \u00b6 apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : archive-location- spec : archiveLogs : true entrypoint : whalesay templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ] Configuring Workflow Template \u00b6 apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : archive-location- spec : entrypoint : whalesay templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ] archiveLocation : archiveLogs : true","title":"Configuring Archive Logs"},{"location":"configure-archive-logs/#configuring-archive-logs","text":"\u26a0\ufe0f We do not recommend you rely on Argo Workflows to archive logs. Instead, use a conventional Kubernetes logging facility. To enable automatic pipeline logging, you need to configure archiveLogs at workflow-controller config-map, workflow spec, or template level. You also need to configure Artifact Repository to define where this logging artifact is stored. Archive logs follows priorities: workflow-controller config (on) > workflow spec (on/off) > template (on/off) Controller Config Map Workflow Spec Template are we archiving logs? true true true true true true false true true false true true true false false true false true true true false true false false false false true true false false false false","title":"Configuring Archive Logs"},{"location":"configure-archive-logs/#configuring-workflow-controller-config-map","text":"See Workflow Controller Config Map","title":"Configuring Workflow Controller Config Map"},{"location":"configure-archive-logs/#configuring-workflow-spec","text":"apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : archive-location- spec : archiveLogs : true entrypoint : whalesay templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ]","title":"Configuring Workflow Spec"},{"location":"configure-archive-logs/#configuring-workflow-template","text":"apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : archive-location- spec : entrypoint : whalesay templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ] archiveLocation : archiveLogs : true","title":"Configuring Workflow Template"},{"location":"configure-artifact-repository/","text":"Configuring Your Artifact Repository \u00b6 To run Argo workflows that use artifacts, you must configure and use an artifact repository. Argo supports any S3 compatible artifact repository such as AWS, GCS and MinIO. This section shows how to configure the artifact repository. Subsequent sections will show how to use it. 
Name Inputs Outputs Garbage Collection Usage (Feb 2020) Artifactory Yes Yes No 11% Azure Blob Yes Yes Yes - GCS Yes Yes Yes - Git Yes No No - HDFS Yes Yes No 3% HTTP Yes Yes No 2% OSS Yes Yes No - Raw Yes No No 5% S3 Yes Yes Yes 86% The actual repository used by a workflow is chosen by the following rules: Anything explicitly configured using Artifact Repository Ref . This is the most flexible, safe, and secure option. From a config map named artifact-repositories if it has the workflows.argoproj.io/default-artifact-repository annotation in the workflow's namespace. From a workflow controller config-map. Configuring MinIO \u00b6 You can install MinIO into your cluster via Helm. First, install helm . Then, install MinIO with the below commands: helm repo add minio https://helm.min.io/ # official minio Helm charts helm repo update helm install argo-artifacts minio/minio --set service.type = LoadBalancer --set fullnameOverride = argo-artifacts Login to the MinIO UI using a web browser (port 9000) after obtaining the external IP using kubectl . kubectl get service argo-artifacts On Minikube: minikube service --url argo-artifacts NOTE: When MinIO is installed via Helm, it generates credentials, which you will use to login to the UI: Use the commands shown below to see the credentials AccessKey : kubectl get secret argo-artifacts -o jsonpath='{.data.accesskey}' | base64 --decode SecretKey : kubectl get secret argo-artifacts -o jsonpath='{.data.secretkey}' | base64 --decode Create a bucket named my-bucket from the MinIO UI. If MinIO is configured to use TLS you need to set the parameter insecure to false . Additionally, if MinIO is protected by certificates generated by a custom CA, you first need to save the CA certificate in a Kubernetes secret, then set the caSecret parameter accordingly. This will allow Argo to correctly verify the server certificate presented by MinIO. For example: kubectl create secret generic my-root-ca --from-file = my-ca.pem artifacts : - s3 : insecure : false caSecret : name : my-root-ca key : my-ca.pem ... Configuring AWS S3 \u00b6 Create your bucket and access keys for the bucket. AWS access keys have the same permissions as the user they are associated with. In particular, you cannot create access keys with reduced scope. If you want to limit the permissions for an access key, you will need to create a user with just the permissions you want to associate with the access key. Otherwise, you can just create an access key using your existing user account. $ export mybucket = bucket249 $ cat > policy.json < access-key.json If you do not have Artifact Garbage Collection configured, you should remove s3:DeleteObject from the list of Actions above. NOTE: if you want argo to figure out which region your buckets belong in, you must additionally set the following statement policy. Otherwise, you must specify a bucket region in your workflow configuration. { \"Effect\" : \"Allow\" , \"Action\" :[ \"s3:GetBucketLocation\" ], \"Resource\" : \"arn:aws:s3:::*\" } ... AWS S3 IRSA \u00b6 If you wish to use S3 IRSA instead of passing in an accessKey and secretKey , you need to annotate the service account of both the running workflow (in order to save logs/artifacts) and the argo-server pod (in order to retrieve the logs/artifacts). 
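Once the annotated service account exists (an example manifest follows below), a workflow opts into it via spec.serviceAccountName; a minimal fragment, assuming the account is named myserviceaccount:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: irsa-example-
spec:
  serviceAccountName: myserviceaccount   # the IRSA-annotated service account
  entrypoint: main
  templates:
    - name: main
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["hello world"]
```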
apiVersion : v1 kind : ServiceAccount metadata : annotations : eks.amazonaws.com/role-arn : arn:aws:iam::012345678901:role/mybucket name : myserviceaccount namespace : mynamespace Configuring GCS (Google Cloud Storage) \u00b6 Create a bucket from the GCP Console ( https://console.cloud.google.com/storage/browser ). There are 2 ways to configure a Google Cloud Storage. Through Native GCS APIs \u00b6 Create and download a Google Cloud service account key. Create a kubernetes secret to store the key. Configure gcs artifact as following in the yaml. artifacts : - name : message path : /tmp/message gcs : bucket : my-bucket-name key : path/in/bucket # serviceAccountKeySecret is a secret selector. # It references the k8s secret named 'my-gcs-credentials'. # This secret is expected to have have the key 'serviceAccountKey', # containing the base64 encoded credentials # to the bucket. # # If it's running on GKE and Workload Identity is used, # serviceAccountKeySecret is not needed. serviceAccountKeySecret : name : my-gcs-credentials key : serviceAccountKey If it's a GKE cluster, and Workload Identity is configured, there's no need to create the service account key and store it as a Kubernetes secret, serviceAccountKeySecret is also not needed in this case. Please follow the link to configure Workload Identity ( https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity ). Use S3 APIs \u00b6 Enable S3 compatible access and create an access key. Note that S3 compatible access is on a per project rather than per bucket basis. Navigate to Storage > Settings ( https://console.cloud.google.com/storage/settings ). Enable interoperability access if needed. Create a new key if needed. Configure s3 artifact as following example. artifacts : - name : my-output-artifact path : /my-output-artifact s3 : endpoint : storage.googleapis.com bucket : my-gcs-bucket-name # NOTE that, by default, all output artifacts are automatically tarred and # gzipped before saving. So as a best practice, .tgz or .tar.gz # should be incorporated into the key name so the resulting file # has an accurate file extension. key : path/in/bucket/my-output-artifact.tgz accessKeySecret : name : my-gcs-s3-credentials key : accessKey secretKeySecret : name : my-gcs-s3-credentials key : secretKey Configuring Alibaba Cloud OSS (Object Storage Service) \u00b6 Create your bucket and access key for the bucket. Suggest to limit the permission for the access key, you will need to create a user with just the permissions you want to associate with the access key. Otherwise, you can just create an access key using your existing user account. Setup Alibaba Cloud CLI and follow the steps to configure the artifact storage for your workflow: $ export mybucket = bucket-workflow-artifect $ export myregion = cn-zhangjiakou $ # limit permission to read/write the bucket. $ cat > policy.json < access-key.json $ # create secret in demo namespace, replace demo with your namespace. $ kubectl create secret generic $mybucket -credentials -n demo \\ --from-literal \"accessKey= $( cat access-key.json | jq -r .AccessKey.AccessKeyId ) \" \\ --from-literal \"secretKey= $( cat access-key.json | jq -r .AccessKey.AccessKeySecret ) \" $ # create configmap to config default artifact for a namespace. $ cat > default-artifact-repository.yaml << EOF apiVersion: v1 kind: ConfigMap metadata: # If you want to use this config map by default, name it \"artifact-repositories\". 
Otherwise, you can provide a reference to a # different config map in `artifactRepositoryRef.configMap`. name: artifact-repositories annotations: # v3.0 and after - if you want to use a specific key, put that key into this annotation. workflows.argoproj.io/default-artifact-repository: default-oss-artifact-repository data: default-oss-artifact-repository: | oss: endpoint: http://oss-cn-zhangjiakou-internal.aliyuncs.com bucket: $mybucket # accessKeySecret and secretKeySecret are secret selectors. # It references the k8s secret named 'bucket-workflow-artifect-credentials'. # This secret is expected to have have the keys 'accessKey' # and 'secretKey', containing the base64 encoded credentials # to the bucket. accessKeySecret: name: $mybucket-credentials key: accessKey secretKeySecret: name: $mybucket-credentials key: secretKey EOF # create cm in demo namespace, replace demo with your namespace. $ k apply -f default-artifact-repository.yaml -n demo You can also set createBucketIfNotPresent to true to tell the artifact driver to automatically create the OSS bucket if it doesn't exist yet when saving artifacts. Note that you'll need to set additional permission for your OSS account to create new buckets. Alibaba Cloud OSS RRSA \u00b6 If you wish to use OSS RRSA instead of passing in an accessKey and secretKey , you need to perform the following actions: Install pod-identity-webhook in your cluster to automatically inject the OIDC tokens and environment variables. Add the label pod-identity.alibabacloud.com/injection: 'on' to the target workflow namespace. Add the annotation pod-identity.alibabacloud.com/role-name: $your_ram_role_name to the service account of running workflow. Set useSDKCreds: true in your target artifact repository cm and remove the secret references to AK/SK. apiVersion : v1 kind : Namespace metadata : name : my-ns labels : pod-identity.alibabacloud.com/injection : 'on' --- apiVersion : v1 kind : ServiceAccount metadata : name : my-sa namespace : rrsa-demo annotations : pod-identity.alibabacloud.com/role-name : $your_ram_role_name --- apiVersion : v1 kind : ConfigMap metadata : # If you want to use this config map by default, name it \"artifact-repositories\". Otherwise, you can provide a reference to a # different config map in `artifactRepositoryRef.configMap`. name : artifact-repositories annotations : # v3.0 and after - if you want to use a specific key, put that key into this annotation. workflows.argoproj.io/default-artifact-repository : default-oss-artifact-repository data : default-oss-artifact-repository : | oss: endpoint: http://oss-cn-zhangjiakou-internal.aliyuncs.com bucket: $mybucket useSDKCreds: true Configuring Azure Blob Storage \u00b6 Create an Azure Storage account and a container within that account. There are a number of ways to accomplish this, including the Azure Portal or the CLI . Retrieve the blob service endpoint for the storage account. For example: az storage account show -n mystorageaccountname --query 'primaryEndpoints.blob' -otsv Retrieve the access key for the storage account. For example: az storage account keys list -n mystorageaccountname --query '[0].value' -otsv Create a kubernetes secret to hold the storage account key. For example: kubectl create secret generic my-azure-storage-credentials \\ --from-literal \"account-access-key= $( az storage account keys list -n mystorageaccountname --query '[0].value' -otsv ) \" Configure azure artifact as following in the yaml. 
artifacts : - name : message path : /tmp/message azure : endpoint : https://mystorageaccountname.blob.core.windows.net container : my-container-name blob : path/in/container # accountKeySecret is a secret selector. # It references the k8s secret named 'my-azure-storage-credentials'. # This secret is expected to have have the key 'account-access-key', # containing the base64 encoded credentials to the storage account. # # If a managed identity has been assigned to the machines running the # workflow (e.g., https://docs.microsoft.com/en-us/azure/aks/use-managed-identity) # then accountKeySecret is not needed, and useSDKCreds should be # set to true instead: # useSDKCreds: true accountKeySecret : name : my-azure-storage-credentials key : account-access-key If useSDKCreds is set to true , then the accountKeySecret value is not used and authentication with Azure will be attempted using a DefaultAzureCredential instead. Configure the Default Artifact Repository \u00b6 In order for Argo to use your artifact repository, you can configure it as the default repository. Edit the workflow-controller config map with the correct endpoint and access/secret keys for your repository. S3 compatible artifact repository bucket (such as AWS, GCS, MinIO, and Alibaba Cloud OSS) \u00b6 Use the endpoint corresponding to your provider: AWS: s3.amazonaws.com GCS: storage.googleapis.com MinIO: my-minio-endpoint.default:9000 Alibaba Cloud OSS: oss-cn-hangzhou-zmf.aliyuncs.com The key is name of the object in the bucket The accessKeySecret and secretKeySecret are secret selectors that reference the specified kubernetes secret. The secret is expected to have the keys accessKey and secretKey , containing the base64 encoded credentials to the bucket. For AWS, the accessKeySecret and secretKeySecret correspond to AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY respectively. EC2 provides a meta-data API via which applications using the AWS SDK may assume IAM roles associated with the instance. If you are running argo on EC2 and the instance role allows access to your S3 bucket, you can configure the workflow step pods to assume the role. To do so, simply omit the accessKeySecret and secretKeySecret fields. For GCS, the accessKeySecret and secretKeySecret for S3 compatible access can be obtained from the GCP Console. Note that S3 compatible access is on a per project rather than per bucket basis. Navigate to Storage > Settings ( https://console.cloud.google.com/storage/settings ). Enable interoperability access if needed. Create a new key if needed. For MinIO, the accessKeySecret and secretKeySecret naturally correspond the AccessKey and SecretKey . For Alibaba Cloud OSS, the accessKeySecret and secretKeySecret corresponds to accessKeyID and accessKeySecret respectively. Example: $ kubectl edit configmap workflow-controller-configmap -n argo # assumes argo was installed in the argo namespace ... data: artifactRepository: | s3: bucket: my-bucket keyFormat: prefix/in/bucket #optional endpoint: my-minio-endpoint.default:9000 #AWS => s3.amazonaws.com; GCS => storage.googleapis.com insecure: true #omit for S3/GCS. Needed when minio runs without TLS accessKeySecret: #omit if accessing via AWS IAM name: my-minio-cred key: accessKey secretKeySecret: #omit if accessing via AWS IAM name: my-minio-cred key: secretKey useSDKCreds: true #tells argo to use AWS SDK's default provider chain, enable for things like IRSA support The secrets are retrieved from the namespace you use to run your workflows. Note that you can specify a keyFormat . 
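For instance, the keyFormat can interpolate workflow variables so each pod's artifacts land under a predictable prefix; a sketch (the date-partitioned prefix is illustrative and assumes the creationTimestamp variables are available in your version):

```yaml
artifactRepository: |
  s3:
    bucket: my-bucket
    endpoint: s3.amazonaws.com
    # group artifacts by date, then by workflow and pod
    keyFormat: "artifacts/{{workflow.creationTimestamp.Y}}/{{workflow.creationTimestamp.m}}/{{workflow.creationTimestamp.d}}/{{workflow.name}}/{{pod.name}}"
```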
Google Cloud Storage (GCS) \u00b6 Argo also can use native GCS APIs to access a Google Cloud Storage bucket. serviceAccountKeySecret references to a Kubernetes secret which stores a Google Cloud service account key to access the bucket. Example: $ kubectl edit configmap workflow-controller-configmap -n argo # assumes argo was installed in the argo namespace ... data: artifactRepository: | gcs: bucket: my-bucket keyFormat: prefix/in/bucket/ {{ workflow.name }} / {{ pod.name }} #it should reference workflow variables, such as \"{{workflow.name}}/{{pod.name}}\" serviceAccountKeySecret: name: my-gcs-credentials key: serviceAccountKey Azure Blob Storage \u00b6 Argo can use native Azure APIs to access a Azure Blob Storage container. accountKeySecret references to a Kubernetes secret which stores an Azure Blob Storage account shared key to access the container. Example: $ kubectl edit configmap workflow-controller-configmap -n argo # assumes argo was installed in the argo namespace ... data: artifactRepository: | azure: container: my-container blobNameFormat: prefix/in/container #optional, it could reference workflow variables, such as \"{{workflow.name}}/{{pod.name}}\" accountKeySecret: name: my-azure-storage-credentials key: account-access-key Accessing Non-Default Artifact Repositories \u00b6 This section shows how to access artifacts from non-default artifact repositories. The endpoint , accessKeySecret and secretKeySecret are the same as for configuring the default artifact repository described previously. templates : - name : artifact-example inputs : artifacts : - name : my-input-artifact path : /my-input-artifact s3 : endpoint : s3.amazonaws.com bucket : my-aws-bucket-name key : path/in/bucket/my-input-artifact.tgz accessKeySecret : name : my-aws-s3-credentials key : accessKey secretKeySecret : name : my-aws-s3-credentials key : secretKey outputs : artifacts : - name : my-output-artifact path : /my-output-artifact s3 : endpoint : storage.googleapis.com bucket : my-gcs-bucket-name # NOTE that, by default, all output artifacts are automatically tarred and # gzipped before saving. So as a best practice, .tgz or .tar.gz # should be incorporated into the key name so the resulting file # has an accurate file extension. key : path/in/bucket/my-output-artifact.tgz accessKeySecret : name : my-gcs-s3-credentials key : accessKey secretKeySecret : name : my-gcs-s3-credentials key : secretKey region : my-GCS-storage-bucket-region container : image : debian:latest command : [ sh , -c ] args : [ \"cp -r /my-input-artifact /my-output-artifact\" ] Artifact Streaming \u00b6 With artifact streaming, artifacts don\u2019t need to be saved to disk first. Artifact streaming is only supported in the following artifact drivers: S3 (v3.4+), Azure Blob (v3.4+), HTTP (v3.5+), and Artifactory (v3.5+). Previously, when a user would click the button to download an artifact in the UI, the artifact would need to be written to the Argo Server\u2019s disk first before downloading. If many users tried to download simultaneously, they would take up disk space and fail the download.","title":"Configuring Your Artifact Repository"},{"location":"configure-artifact-repository/#configuring-your-artifact-repository","text":"To run Argo workflows that use artifacts, you must configure and use an artifact repository. Argo supports any S3 compatible artifact repository such as AWS, GCS and MinIO. This section shows how to configure the artifact repository. Subsequent sections will show how to use it. 
Name Inputs Outputs Garbage Collection Usage (Feb 2020) Artifactory Yes Yes No 11% Azure Blob Yes Yes Yes - GCS Yes Yes Yes - Git Yes No No - HDFS Yes Yes No 3% HTTP Yes Yes No 2% OSS Yes Yes No - Raw Yes No No 5% S3 Yes Yes Yes 86% The actual repository used by a workflow is chosen by the following rules: Anything explicitly configured using Artifact Repository Ref . This is the most flexible, safe, and secure option. From a config map named artifact-repositories if it has the workflows.argoproj.io/default-artifact-repository annotation in the workflow's namespace. From a workflow controller config-map.","title":"Configuring Your Artifact Repository"},{"location":"configure-artifact-repository/#configuring-minio","text":"You can install MinIO into your cluster via Helm. First, install helm . Then, install MinIO with the below commands: helm repo add minio https://helm.min.io/ # official minio Helm charts helm repo update helm install argo-artifacts minio/minio --set service.type = LoadBalancer --set fullnameOverride = argo-artifacts Login to the MinIO UI using a web browser (port 9000) after obtaining the external IP using kubectl . kubectl get service argo-artifacts On Minikube: minikube service --url argo-artifacts NOTE: When MinIO is installed via Helm, it generates credentials, which you will use to login to the UI: Use the commands shown below to see the credentials AccessKey : kubectl get secret argo-artifacts -o jsonpath='{.data.accesskey}' | base64 --decode SecretKey : kubectl get secret argo-artifacts -o jsonpath='{.data.secretkey}' | base64 --decode Create a bucket named my-bucket from the MinIO UI. If MinIO is configured to use TLS you need to set the parameter insecure to false . Additionally, if MinIO is protected by certificates generated by a custom CA, you first need to save the CA certificate in a Kubernetes secret, then set the caSecret parameter accordingly. This will allow Argo to correctly verify the server certificate presented by MinIO. For example: kubectl create secret generic my-root-ca --from-file = my-ca.pem artifacts : - s3 : insecure : false caSecret : name : my-root-ca key : my-ca.pem ...","title":"Configuring MinIO"},{"location":"configure-artifact-repository/#configuring-aws-s3","text":"Create your bucket and access keys for the bucket. AWS access keys have the same permissions as the user they are associated with. In particular, you cannot create access keys with reduced scope. If you want to limit the permissions for an access key, you will need to create a user with just the permissions you want to associate with the access key. Otherwise, you can just create an access key using your existing user account. $ export mybucket = bucket249 $ cat > policy.json < access-key.json If you do not have Artifact Garbage Collection configured, you should remove s3:DeleteObject from the list of Actions above. NOTE: if you want argo to figure out which region your buckets belong in, you must additionally set the following statement policy. Otherwise, you must specify a bucket region in your workflow configuration. { \"Effect\" : \"Allow\" , \"Action\" :[ \"s3:GetBucketLocation\" ], \"Resource\" : \"arn:aws:s3:::*\" } ...","title":"Configuring AWS S3"},{"location":"configure-artifact-repository/#aws-s3-irsa","text":"If you wish to use S3 IRSA instead of passing in an accessKey and secretKey , you need to annotate the service account of both the running workflow (in order to save logs/artifacts) and the argo-server pod (in order to retrieve the logs/artifacts). 
apiVersion : v1 kind : ServiceAccount metadata : annotations : eks.amazonaws.com/role-arn : arn:aws:iam::012345678901:role/mybucket name : myserviceaccount namespace : mynamespace","title":"AWS S3 IRSA"},{"location":"configure-artifact-repository/#configuring-gcs-google-cloud-storage","text":"Create a bucket from the GCP Console ( https://console.cloud.google.com/storage/browser ). There are 2 ways to configure a Google Cloud Storage.","title":"Configuring GCS (Google Cloud Storage)"},{"location":"configure-artifact-repository/#through-native-gcs-apis","text":"Create and download a Google Cloud service account key. Create a kubernetes secret to store the key. Configure gcs artifact as following in the yaml. artifacts : - name : message path : /tmp/message gcs : bucket : my-bucket-name key : path/in/bucket # serviceAccountKeySecret is a secret selector. # It references the k8s secret named 'my-gcs-credentials'. # This secret is expected to have have the key 'serviceAccountKey', # containing the base64 encoded credentials # to the bucket. # # If it's running on GKE and Workload Identity is used, # serviceAccountKeySecret is not needed. serviceAccountKeySecret : name : my-gcs-credentials key : serviceAccountKey If it's a GKE cluster, and Workload Identity is configured, there's no need to create the service account key and store it as a Kubernetes secret, serviceAccountKeySecret is also not needed in this case. Please follow the link to configure Workload Identity ( https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity ).","title":"Through Native GCS APIs"},{"location":"configure-artifact-repository/#use-s3-apis","text":"Enable S3 compatible access and create an access key. Note that S3 compatible access is on a per project rather than per bucket basis. Navigate to Storage > Settings ( https://console.cloud.google.com/storage/settings ). Enable interoperability access if needed. Create a new key if needed. Configure s3 artifact as following example. artifacts : - name : my-output-artifact path : /my-output-artifact s3 : endpoint : storage.googleapis.com bucket : my-gcs-bucket-name # NOTE that, by default, all output artifacts are automatically tarred and # gzipped before saving. So as a best practice, .tgz or .tar.gz # should be incorporated into the key name so the resulting file # has an accurate file extension. key : path/in/bucket/my-output-artifact.tgz accessKeySecret : name : my-gcs-s3-credentials key : accessKey secretKeySecret : name : my-gcs-s3-credentials key : secretKey","title":"Use S3 APIs"},{"location":"configure-artifact-repository/#configuring-alibaba-cloud-oss-object-storage-service","text":"Create your bucket and access key for the bucket. Suggest to limit the permission for the access key, you will need to create a user with just the permissions you want to associate with the access key. Otherwise, you can just create an access key using your existing user account. Setup Alibaba Cloud CLI and follow the steps to configure the artifact storage for your workflow: $ export mybucket = bucket-workflow-artifect $ export myregion = cn-zhangjiakou $ # limit permission to read/write the bucket. $ cat > policy.json < access-key.json $ # create secret in demo namespace, replace demo with your namespace. 
$ kubectl create secret generic $mybucket -credentials -n demo \\ --from-literal \"accessKey= $( cat access-key.json | jq -r .AccessKey.AccessKeyId ) \" \\ --from-literal \"secretKey= $( cat access-key.json | jq -r .AccessKey.AccessKeySecret ) \" $ # create configmap to config default artifact for a namespace. $ cat > default-artifact-repository.yaml << EOF apiVersion: v1 kind: ConfigMap metadata: # If you want to use this config map by default, name it \"artifact-repositories\". Otherwise, you can provide a reference to a # different config map in `artifactRepositoryRef.configMap`. name: artifact-repositories annotations: # v3.0 and after - if you want to use a specific key, put that key into this annotation. workflows.argoproj.io/default-artifact-repository: default-oss-artifact-repository data: default-oss-artifact-repository: | oss: endpoint: http://oss-cn-zhangjiakou-internal.aliyuncs.com bucket: $mybucket # accessKeySecret and secretKeySecret are secret selectors. # It references the k8s secret named 'bucket-workflow-artifect-credentials'. # This secret is expected to have have the keys 'accessKey' # and 'secretKey', containing the base64 encoded credentials # to the bucket. accessKeySecret: name: $mybucket-credentials key: accessKey secretKeySecret: name: $mybucket-credentials key: secretKey EOF # create cm in demo namespace, replace demo with your namespace. $ k apply -f default-artifact-repository.yaml -n demo You can also set createBucketIfNotPresent to true to tell the artifact driver to automatically create the OSS bucket if it doesn't exist yet when saving artifacts. Note that you'll need to set additional permission for your OSS account to create new buckets.","title":"Configuring Alibaba Cloud OSS (Object Storage Service)"},{"location":"configure-artifact-repository/#alibaba-cloud-oss-rrsa","text":"If you wish to use OSS RRSA instead of passing in an accessKey and secretKey , you need to perform the following actions: Install pod-identity-webhook in your cluster to automatically inject the OIDC tokens and environment variables. Add the label pod-identity.alibabacloud.com/injection: 'on' to the target workflow namespace. Add the annotation pod-identity.alibabacloud.com/role-name: $your_ram_role_name to the service account of running workflow. Set useSDKCreds: true in your target artifact repository cm and remove the secret references to AK/SK. apiVersion : v1 kind : Namespace metadata : name : my-ns labels : pod-identity.alibabacloud.com/injection : 'on' --- apiVersion : v1 kind : ServiceAccount metadata : name : my-sa namespace : rrsa-demo annotations : pod-identity.alibabacloud.com/role-name : $your_ram_role_name --- apiVersion : v1 kind : ConfigMap metadata : # If you want to use this config map by default, name it \"artifact-repositories\". Otherwise, you can provide a reference to a # different config map in `artifactRepositoryRef.configMap`. name : artifact-repositories annotations : # v3.0 and after - if you want to use a specific key, put that key into this annotation. workflows.argoproj.io/default-artifact-repository : default-oss-artifact-repository data : default-oss-artifact-repository : | oss: endpoint: http://oss-cn-zhangjiakou-internal.aliyuncs.com bucket: $mybucket useSDKCreds: true","title":"Alibaba Cloud OSS RRSA"},{"location":"configure-artifact-repository/#configuring-azure-blob-storage","text":"Create an Azure Storage account and a container within that account. There are a number of ways to accomplish this, including the Azure Portal or the CLI . 
Retrieve the blob service endpoint for the storage account. For example: az storage account show -n mystorageaccountname --query 'primaryEndpoints.blob' -otsv Retrieve the access key for the storage account. For example: az storage account keys list -n mystorageaccountname --query '[0].value' -otsv Create a kubernetes secret to hold the storage account key. For example: kubectl create secret generic my-azure-storage-credentials \\ --from-literal \"account-access-key= $( az storage account keys list -n mystorageaccountname --query '[0].value' -otsv ) \" Configure azure artifact as following in the yaml. artifacts : - name : message path : /tmp/message azure : endpoint : https://mystorageaccountname.blob.core.windows.net container : my-container-name blob : path/in/container # accountKeySecret is a secret selector. # It references the k8s secret named 'my-azure-storage-credentials'. # This secret is expected to have have the key 'account-access-key', # containing the base64 encoded credentials to the storage account. # # If a managed identity has been assigned to the machines running the # workflow (e.g., https://docs.microsoft.com/en-us/azure/aks/use-managed-identity) # then accountKeySecret is not needed, and useSDKCreds should be # set to true instead: # useSDKCreds: true accountKeySecret : name : my-azure-storage-credentials key : account-access-key If useSDKCreds is set to true , then the accountKeySecret value is not used and authentication with Azure will be attempted using a DefaultAzureCredential instead.","title":"Configuring Azure Blob Storage"},{"location":"configure-artifact-repository/#configure-the-default-artifact-repository","text":"In order for Argo to use your artifact repository, you can configure it as the default repository. Edit the workflow-controller config map with the correct endpoint and access/secret keys for your repository.","title":"Configure the Default Artifact Repository"},{"location":"configure-artifact-repository/#s3-compatible-artifact-repository-bucket-such-as-aws-gcs-minio-and-alibaba-cloud-oss","text":"Use the endpoint corresponding to your provider: AWS: s3.amazonaws.com GCS: storage.googleapis.com MinIO: my-minio-endpoint.default:9000 Alibaba Cloud OSS: oss-cn-hangzhou-zmf.aliyuncs.com The key is name of the object in the bucket The accessKeySecret and secretKeySecret are secret selectors that reference the specified kubernetes secret. The secret is expected to have the keys accessKey and secretKey , containing the base64 encoded credentials to the bucket. For AWS, the accessKeySecret and secretKeySecret correspond to AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY respectively. EC2 provides a meta-data API via which applications using the AWS SDK may assume IAM roles associated with the instance. If you are running argo on EC2 and the instance role allows access to your S3 bucket, you can configure the workflow step pods to assume the role. To do so, simply omit the accessKeySecret and secretKeySecret fields. For GCS, the accessKeySecret and secretKeySecret for S3 compatible access can be obtained from the GCP Console. Note that S3 compatible access is on a per project rather than per bucket basis. Navigate to Storage > Settings ( https://console.cloud.google.com/storage/settings ). Enable interoperability access if needed. Create a new key if needed. For MinIO, the accessKeySecret and secretKeySecret naturally correspond the AccessKey and SecretKey . 
For Alibaba Cloud OSS, the accessKeySecret and secretKeySecret corresponds to accessKeyID and accessKeySecret respectively. Example: $ kubectl edit configmap workflow-controller-configmap -n argo # assumes argo was installed in the argo namespace ... data: artifactRepository: | s3: bucket: my-bucket keyFormat: prefix/in/bucket #optional endpoint: my-minio-endpoint.default:9000 #AWS => s3.amazonaws.com; GCS => storage.googleapis.com insecure: true #omit for S3/GCS. Needed when minio runs without TLS accessKeySecret: #omit if accessing via AWS IAM name: my-minio-cred key: accessKey secretKeySecret: #omit if accessing via AWS IAM name: my-minio-cred key: secretKey useSDKCreds: true #tells argo to use AWS SDK's default provider chain, enable for things like IRSA support The secrets are retrieved from the namespace you use to run your workflows. Note that you can specify a keyFormat .","title":"S3 compatible artifact repository bucket (such as AWS, GCS, MinIO, and Alibaba Cloud OSS)"},{"location":"configure-artifact-repository/#google-cloud-storage-gcs","text":"Argo also can use native GCS APIs to access a Google Cloud Storage bucket. serviceAccountKeySecret references to a Kubernetes secret which stores a Google Cloud service account key to access the bucket. Example: $ kubectl edit configmap workflow-controller-configmap -n argo # assumes argo was installed in the argo namespace ... data: artifactRepository: | gcs: bucket: my-bucket keyFormat: prefix/in/bucket/ {{ workflow.name }} / {{ pod.name }} #it should reference workflow variables, such as \"{{workflow.name}}/{{pod.name}}\" serviceAccountKeySecret: name: my-gcs-credentials key: serviceAccountKey","title":"Google Cloud Storage (GCS)"},{"location":"configure-artifact-repository/#azure-blob-storage","text":"Argo can use native Azure APIs to access a Azure Blob Storage container. accountKeySecret references to a Kubernetes secret which stores an Azure Blob Storage account shared key to access the container. Example: $ kubectl edit configmap workflow-controller-configmap -n argo # assumes argo was installed in the argo namespace ... data: artifactRepository: | azure: container: my-container blobNameFormat: prefix/in/container #optional, it could reference workflow variables, such as \"{{workflow.name}}/{{pod.name}}\" accountKeySecret: name: my-azure-storage-credentials key: account-access-key","title":"Azure Blob Storage"},{"location":"configure-artifact-repository/#accessing-non-default-artifact-repositories","text":"This section shows how to access artifacts from non-default artifact repositories. The endpoint , accessKeySecret and secretKeySecret are the same as for configuring the default artifact repository described previously. templates : - name : artifact-example inputs : artifacts : - name : my-input-artifact path : /my-input-artifact s3 : endpoint : s3.amazonaws.com bucket : my-aws-bucket-name key : path/in/bucket/my-input-artifact.tgz accessKeySecret : name : my-aws-s3-credentials key : accessKey secretKeySecret : name : my-aws-s3-credentials key : secretKey outputs : artifacts : - name : my-output-artifact path : /my-output-artifact s3 : endpoint : storage.googleapis.com bucket : my-gcs-bucket-name # NOTE that, by default, all output artifacts are automatically tarred and # gzipped before saving. So as a best practice, .tgz or .tar.gz # should be incorporated into the key name so the resulting file # has an accurate file extension. 
key : path/in/bucket/my-output-artifact.tgz accessKeySecret : name : my-gcs-s3-credentials key : accessKey secretKeySecret : name : my-gcs-s3-credentials key : secretKey region : my-GCS-storage-bucket-region container : image : debian:latest command : [ sh , -c ] args : [ \"cp -r /my-input-artifact /my-output-artifact\" ]","title":"Accessing Non-Default Artifact Repositories"},{"location":"configure-artifact-repository/#artifact-streaming","text":"With artifact streaming, artifacts don\u2019t need to be saved to disk first. Artifact streaming is only supported in the following artifact drivers: S3 (v3.4+), Azure Blob (v3.4+), HTTP (v3.5+), and Artifactory (v3.5+). Previously, when a user would click the button to download an artifact in the UI, the artifact would need to be written to the Argo Server\u2019s disk first before downloading. If many users tried to download simultaneously, they would take up disk space and fail the download.","title":"Artifact Streaming"},{"location":"container-set-template/","text":"Container Set Template \u00b6 v3.1 and after A container set templates is similar to a normal container or script template, but allows you to specify multiple containers to run within a single pod. Because you have multiple containers within a pod, they will be scheduled on the same host. You can use cheap and fast empty-dir volumes instead of persistent volume claims to share data between steps. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : container-set-template- spec : entrypoint : main templates : - name : main volumes : - name : workspace emptyDir : { } containerSet : volumeMounts : - mountPath : /workspace name : workspace containers : - name : a image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"echo 'a: hello world' >> /workspace/message\" ] - name : b image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"echo 'b: hello world' >> /workspace/message\" ] - name : main image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"echo 'main: hello world' >> /workspace/message\" ] dependencies : - a - b outputs : parameters : - name : message valueFrom : path : /workspace/message There are a couple of caveats: You must use the Emissary Executor . Or all containers must run in parallel - i.e. it is a graph with no dependencies. You cannot use enhanced depends logic . It will use the sum total of all resource requests, maybe costing more than the same DAG template. This will be a problem if your requests already cost a lot. See below. The containers can be arranged as a graph by specifying dependencies. This is suitable for running 10s rather than 100s of containers. Inputs and Outputs \u00b6 As with the container and script templates, inputs and outputs can only be loaded and saved from a container named main . All container set templates that have artifacts must/should have a container named main . If you want to use base-layer artifacts, main must be last to finish, so it must be the root node in the graph. That is may not be practical. Instead, have a workspace volume and make sure all artifacts paths are on that volume. \u26a0\ufe0f Resource Requests \u00b6 A container set actually starts all containers, and the Emissary only starts the main container process when the containers it depends on have completed. This mean that even though the container is doing no useful work, it is still consuming resources and you're still getting billed for them. If your requests are small, this won't be a problem. 
If your requests are large, set the resource requests so the sum total is the most you'll need at once. Example A: a simple sequence e.g. a -> b -> c a needs 1Gi memory b needs 2Gi memory c needs 1Gi memory Then you know you need only a maximum of 2Gi. You could set as follows: a requests 512Mi memory b requests 1Gi memory c requests 512Mi memory The total is 2Gi, which is enough for b . We're all good. Example B: Diamond DAG e.g. a diamond a -> b -> d and a -> c -> d , i.e. b and c run at the same time. a needs 1000 cpu b needs 2000 cpu c needs 1000 cpu d needs 1000 cpu I know that b and c will run at the same time. So I need to make sure the total is 3000. a requests 500 cpu b requests 1000 cpu c requests 1000 cpu d requests 500 cpu The total is 3000, which is enough for b + c . We're all good. Example B: Lopsided requests, e.g. a -> b where a is cheap and b is expensive a needs 100 cpu, 1Mi memory, runs for 10h b needs 8Ki GPU, 100 Gi memory, 200 Ki GPU, runs for 5m Can you see the problem here? a only has small requests, but the container set will use the total of all requests. So it's as if you're using all that GPU for 10h. This will be expensive. Solution: do not use container set when you have lopsided requests.","title":"Container Set Template"},{"location":"container-set-template/#container-set-template","text":"v3.1 and after A container set templates is similar to a normal container or script template, but allows you to specify multiple containers to run within a single pod. Because you have multiple containers within a pod, they will be scheduled on the same host. You can use cheap and fast empty-dir volumes instead of persistent volume claims to share data between steps. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : container-set-template- spec : entrypoint : main templates : - name : main volumes : - name : workspace emptyDir : { } containerSet : volumeMounts : - mountPath : /workspace name : workspace containers : - name : a image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"echo 'a: hello world' >> /workspace/message\" ] - name : b image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"echo 'b: hello world' >> /workspace/message\" ] - name : main image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"echo 'main: hello world' >> /workspace/message\" ] dependencies : - a - b outputs : parameters : - name : message valueFrom : path : /workspace/message There are a couple of caveats: You must use the Emissary Executor . Or all containers must run in parallel - i.e. it is a graph with no dependencies. You cannot use enhanced depends logic . It will use the sum total of all resource requests, maybe costing more than the same DAG template. This will be a problem if your requests already cost a lot. See below. The containers can be arranged as a graph by specifying dependencies. This is suitable for running 10s rather than 100s of containers.","title":"Container Set Template"},{"location":"container-set-template/#inputs-and-outputs","text":"As with the container and script templates, inputs and outputs can only be loaded and saved from a container named main . All container set templates that have artifacts must/should have a container named main . If you want to use base-layer artifacts, main must be last to finish, so it must be the root node in the graph. That is may not be practical. 
Instead, have a workspace volume and make sure all artifacts paths are on that volume.","title":"Inputs and Outputs"},{"location":"container-set-template/#resource-requests","text":"A container set actually starts all containers, and the Emissary only starts the main container process when the containers it depends on have completed. This mean that even though the container is doing no useful work, it is still consuming resources and you're still getting billed for them. If your requests are small, this won't be a problem. If your requests are large, set the resource requests so the sum total is the most you'll need at once. Example A: a simple sequence e.g. a -> b -> c a needs 1Gi memory b needs 2Gi memory c needs 1Gi memory Then you know you need only a maximum of 2Gi. You could set as follows: a requests 512Mi memory b requests 1Gi memory c requests 512Mi memory The total is 2Gi, which is enough for b . We're all good. Example B: Diamond DAG e.g. a diamond a -> b -> d and a -> c -> d , i.e. b and c run at the same time. a needs 1000 cpu b needs 2000 cpu c needs 1000 cpu d needs 1000 cpu I know that b and c will run at the same time. So I need to make sure the total is 3000. a requests 500 cpu b requests 1000 cpu c requests 1000 cpu d requests 500 cpu The total is 3000, which is enough for b + c . We're all good. Example B: Lopsided requests, e.g. a -> b where a is cheap and b is expensive a needs 100 cpu, 1Mi memory, runs for 10h b needs 8Ki GPU, 100 Gi memory, 200 Ki GPU, runs for 5m Can you see the problem here? a only has small requests, but the container set will use the total of all requests. So it's as if you're using all that GPU for 10h. This will be expensive. Solution: do not use container set when you have lopsided requests.","title":"\u26a0\ufe0f Resource Requests"},{"location":"cost-optimisation/","text":"Cost Optimization \u00b6 User Cost Optimizations \u00b6 Suggestions for users running workflows. Set The Workflows Pod Resource Requests \u00b6 Suitable if you are running a workflow with many homogeneous pods. Resource duration shows the amount of CPU and memory requested by a pod and is indicative of the cost. You can use this to find costly steps within your workflow. Smaller requests can be set in the pod spec patch's resource requirements . Use A Node Selector To Use Cheaper Instances \u00b6 You can use a node selector for cheaper instances, e.g. spot instances: nodeSelector : \"node-role.kubernetes.io/argo-spot-worker\" : \"true\" Consider trying Volume Claim Templates or Volumes instead of Artifacts \u00b6 Suitable if you have a workflow that passes a lot of artifacts within itself. Copying artifacts to and from storage outside of a cluster can be expensive. The correct choice is dependent on what your artifact storage provider is vs. what volume they are using. For example, we believe it may be more expensive to allocate and delete a new block storage volume (AWS EBS, GCP persistent disk) every workflow using the PVC feature, than it is to upload and download some small files to object storage (AWS S3, GCP cloud storage). On the other hand if you are using a NFS volume shared between all your workflows with large artifacts, that might be cheaper than the data transfer and storage costs of object storage. Consider: Data transfer costs (upload/download vs. copying) Data storage costs (object storage vs. volume) Requirement for parallel access to data (NFS vs. block storage vs. artifact) When using volume claims, consider configuring Volume Claim GC . 
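As an illustration, a minimal sketch (sizes, names, and image are placeholders) of a workflow that uses a volume claim template and enables Volume Claim GC so the claim is deleted when the workflow completes, rather than only when it succeeds (the default, as noted just below):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: volume-claim-gc-
spec:
  entrypoint: main
  # Delete the PVC when the workflow completes, even if it failed
  volumeClaimGC:
    strategy: OnWorkflowCompletion
  volumeClaimTemplates:
    - metadata:
        name: workdir
      spec:
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 1Gi
  templates:
    - name: main
      container:
        image: argoproj/argosay:v2
        volumeMounts:
          - name: workdir
            mountPath: /work
```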
By default, claims are only deleted when a workflow is successful. Limit The Total Number Of Workflows And Pods \u00b6 Suitable for all. A workflow (and for that matter, any Kubernetes resource) will incur a cost as long as it exists in your cluster, even after it's no longer running. The workflow controller memory and CPU needs to increase linearly with the number of pods and workflows you are currently running. You should delete workflows once they are no longer needed. You can enable the Workflow Archive to continue viewing them after they are removed from Kubernetes. Limit the total number of workflows using: Active Deadline Seconds - terminate running workflows that do not complete in a set time. This will make sure workflows do not run forever. Workflow TTL Strategy - delete completed workflows after a set time. Pod GC - delete completed pods. By default, Pods are not deleted. CronWorkflow history limits - delete successful or failed workflows which exceed the limit. Example spec : # must complete in 8h (28,800 seconds) activeDeadlineSeconds : 28800 # keep workflows for 1d (86,400 seconds) ttlStrategy : secondsAfterCompletion : 86400 # delete all pods as soon as they complete podGC : strategy : OnPodCompletion You can set these configurations globally using Default Workflow Spec . Changing these settings will not delete workflows that have already run. To list old workflows: argo list --completed --since 7d v2.9 and after To list/delete workflows completed over 7 days ago: argo list --older 7d argo delete --older 7d Operator Cost Optimizations \u00b6 Suggestions for operators who installed Argo Workflows. Set Resources Requests and Limits \u00b6 Suitable if you have many instances, e.g. on dozens of clusters or namespaces. Set resource requests and limits for the workflow-controller and argo-server , e.g. requests : cpu : 100m memory : 64Mi limits : cpu : 500m memory : 128Mi This above limit is suitable for the Argo Server, as this is stateless. The Workflow Controller is stateful and will scale to the number of live workflows - so you are likely to need higher values. Configure Executor Resource Requests \u00b6 Suitable for all - unless you have large artifacts. Configure workflow-controller-configmap.yaml to set the executor.resources : executor : | resources: requests: cpu: 100m memory: 64Mi limits: cpu: 500m memory: 512Mi The correct values depend on the size of artifacts your workflows download. For artifacts > 10GB, memory usage may be large - #1322 .","title":"Cost Optimization"},{"location":"cost-optimisation/#cost-optimization","text":"","title":"Cost Optimization"},{"location":"cost-optimisation/#user-cost-optimizations","text":"Suggestions for users running workflows.","title":"User Cost Optimizations"},{"location":"cost-optimisation/#set-the-workflows-pod-resource-requests","text":"Suitable if you are running a workflow with many homogeneous pods. Resource duration shows the amount of CPU and memory requested by a pod and is indicative of the cost. You can use this to find costly steps within your workflow. Smaller requests can be set in the pod spec patch's resource requirements .","title":"Set The Workflows Pod Resource Requests"},{"location":"cost-optimisation/#use-a-node-selector-to-use-cheaper-instances","text":"You can use a node selector for cheaper instances, e.g. 
spot instances: nodeSelector : \"node-role.kubernetes.io/argo-spot-worker\" : \"true\"","title":"Use A Node Selector To Use Cheaper Instances"},{"location":"cost-optimisation/#consider-trying-volume-claim-templates-or-volumes-instead-of-artifacts","text":"Suitable if you have a workflow that passes a lot of artifacts within itself. Copying artifacts to and from storage outside of a cluster can be expensive. The correct choice is dependent on what your artifact storage provider is vs. what volume they are using. For example, we believe it may be more expensive to allocate and delete a new block storage volume (AWS EBS, GCP persistent disk) every workflow using the PVC feature, than it is to upload and download some small files to object storage (AWS S3, GCP cloud storage). On the other hand if you are using a NFS volume shared between all your workflows with large artifacts, that might be cheaper than the data transfer and storage costs of object storage. Consider: Data transfer costs (upload/download vs. copying) Data storage costs (object storage vs. volume) Requirement for parallel access to data (NFS vs. block storage vs. artifact) When using volume claims, consider configuring Volume Claim GC . By default, claims are only deleted when a workflow is successful.","title":"Consider trying Volume Claim Templates or Volumes instead of Artifacts"},{"location":"cost-optimisation/#limit-the-total-number-of-workflows-and-pods","text":"Suitable for all. A workflow (and for that matter, any Kubernetes resource) will incur a cost as long as it exists in your cluster, even after it's no longer running. The workflow controller memory and CPU needs to increase linearly with the number of pods and workflows you are currently running. You should delete workflows once they are no longer needed. You can enable the Workflow Archive to continue viewing them after they are removed from Kubernetes. Limit the total number of workflows using: Active Deadline Seconds - terminate running workflows that do not complete in a set time. This will make sure workflows do not run forever. Workflow TTL Strategy - delete completed workflows after a set time. Pod GC - delete completed pods. By default, Pods are not deleted. CronWorkflow history limits - delete successful or failed workflows which exceed the limit. Example spec : # must complete in 8h (28,800 seconds) activeDeadlineSeconds : 28800 # keep workflows for 1d (86,400 seconds) ttlStrategy : secondsAfterCompletion : 86400 # delete all pods as soon as they complete podGC : strategy : OnPodCompletion You can set these configurations globally using Default Workflow Spec . Changing these settings will not delete workflows that have already run. To list old workflows: argo list --completed --since 7d v2.9 and after To list/delete workflows completed over 7 days ago: argo list --older 7d argo delete --older 7d","title":"Limit The Total Number Of Workflows And Pods"},{"location":"cost-optimisation/#operator-cost-optimizations","text":"Suggestions for operators who installed Argo Workflows.","title":"Operator Cost Optimizations"},{"location":"cost-optimisation/#set-resources-requests-and-limits","text":"Suitable if you have many instances, e.g. on dozens of clusters or namespaces. Set resource requests and limits for the workflow-controller and argo-server , e.g. requests : cpu : 100m memory : 64Mi limits : cpu : 500m memory : 128Mi This above limit is suitable for the Argo Server, as this is stateless. 
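For instance, these values could be applied to the argo-server Deployment's container spec, as sketched below (the image tag and labels are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-server
spec:
  selector:
    matchLabels:
      app: argo-server
  template:
    metadata:
      labels:
        app: argo-server
    spec:
      containers:
        - name: argo-server
          image: argoproj/argocli:latest
          args: [server]
          # Modest requests/limits are usually enough because the Server is stateless
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 500m
              memory: 128Mi
```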
The Workflow Controller is stateful and will scale to the number of live workflows - so you are likely to need higher values.","title":"Set Resources Requests and Limits"},{"location":"cost-optimisation/#configure-executor-resource-requests","text":"Suitable for all - unless you have large artifacts. Configure workflow-controller-configmap.yaml to set the executor.resources : executor : | resources: requests: cpu: 100m memory: 64Mi limits: cpu: 500m memory: 512Mi The correct values depend on the size of artifacts your workflows download. For artifacts > 10GB, memory usage may be large - #1322 .","title":"Configure Executor Resource Requests"},{"location":"cron-backfill/","text":"Cron Backfill \u00b6 Use Case \u00b6 You are using cron workflows to run daily jobs, you may need to re-run for a date, or run some historical days. Solution \u00b6 Create a workflow template for your daily job. Create your cron workflow to run daily and invoke that template. Create a backfill workflow that uses withSequence to run the job for each date. This full example contains: A workflow template named job . A cron workflow named daily-job . A workflow named backfill-v1 that uses a resource template to create one workflow for each backfill date. A alternative workflow named backfill-v2 that uses a steps templates to run one task for each backfill date.","title":"Cron Backfill"},{"location":"cron-backfill/#cron-backfill","text":"","title":"Cron Backfill"},{"location":"cron-backfill/#use-case","text":"You are using cron workflows to run daily jobs, you may need to re-run for a date, or run some historical days.","title":"Use Case"},{"location":"cron-backfill/#solution","text":"Create a workflow template for your daily job. Create your cron workflow to run daily and invoke that template. Create a backfill workflow that uses withSequence to run the job for each date. This full example contains: A workflow template named job . A cron workflow named daily-job . A workflow named backfill-v1 that uses a resource template to create one workflow for each backfill date. A alternative workflow named backfill-v2 that uses a steps templates to run one task for each backfill date.","title":"Solution"},{"location":"cron-workflows/","text":"Cron Workflows \u00b6 v2.5 and after Introduction \u00b6 CronWorkflow are workflows that run on a preset schedule. They are designed to be converted from Workflow easily and to mimic the same options as Kubernetes CronJob . In essence, CronWorkflow = Workflow + some specific cron options. CronWorkflow Spec \u00b6 An example CronWorkflow spec would look like: apiVersion : argoproj.io/v1alpha1 kind : CronWorkflow metadata : name : test-cron-wf spec : schedule : \"* * * * *\" concurrencyPolicy : \"Replace\" startingDeadlineSeconds : 0 workflowSpec : entrypoint : whalesay templates : - name : whalesay container : image : alpine:3.6 command : [ sh , -c ] args : [ \"date; sleep 90\" ] workflowSpec and workflowMetadata \u00b6 CronWorkflow.spec.workflowSpec is the same type as Workflow.spec and serves as a template for Workflow objects that are created from it. Everything under this spec will be converted to a Workflow . The resulting Workflow name will be a generated name based on the CronWorkflow name. In this example it could be something like test-cron-wf-tj6fe . CronWorkflow.spec.workflowMetadata can be used to add labels and annotations . CronWorkflow Options \u00b6 Option Name Default Value Description schedule None, must be provided Schedule at which the Workflow will be run. E.g. 
5 4 * * * timezone Machine timezone Timezone during which the Workflow will be run from the IANA timezone standard, e.g. America/Los_Angeles suspend false If true Workflow scheduling will not occur. Can be set from the CLI, GitOps, or directly concurrencyPolicy Allow Policy that determines what to do if multiple Workflows are scheduled at the same time. Available options: Allow : allow all, Replace : remove all old before scheduling a new, Forbid : do not allow any new while there are old startingDeadlineSeconds 0 Number of seconds after the last successful run during which a missed Workflow will be run successfulJobsHistoryLimit 3 Number of successful Workflows that will be persisted at a time failedJobsHistoryLimit 1 Number of failed Workflows that will be persisted at a time Cron Schedule Syntax \u00b6 The cron scheduler uses the standard cron syntax, as documented on Wikipedia . More detailed documentation for the specific library used is documented here . Crash Recovery \u00b6 If the workflow-controller crashes (and hence the CronWorkflow controller), there are some options you can set to ensure that CronWorkflows that would have been scheduled while the controller was down can still run. Mainly startingDeadlineSeconds can be set to specify the maximum number of seconds past the last successful run of a CronWorkflow during which a missed run will still be executed. For example, if a CronWorkflow that runs every minute is last run at 12:05:00, and the controller crashes between 12:05:55 and 12:06:05, then the expected execution time of 12:06:00 would be missed. However, if startingDeadlineSeconds is set to a value greater than 65 (the amount of time passing between the last scheduled run time of 12:05:00 and the current controller restart time of 12:06:05), then a single instance of the CronWorkflow will be executed exactly at 12:06:05. Currently only a single instance will be executed as a result of setting startingDeadlineSeconds . This setting can also be configured in tandem with concurrencyPolicy to achieve more fine-tuned control. Daylight Saving \u00b6 Daylight Saving (DST) is taken into account when using timezone. This means that, depending on the local time of the scheduled job, argo will schedule the workflow once, twice, or not at all when the clock moves forward or back. For example, with timezone set at America/Los_Angeles , we have daylight saving +1 hour (DST start) at 2020-03-08 02:00:00: Note: The schedules between 02:00 a.m. to 02:59 a.m. were skipped on Mar 8th due to the clock being moved forward: cron sequence workflow execution time 59 1 ** * 1 2020-03-08 01:59:00 -0800 PST 2 2020-03-09 01:59:00 -0700 PDT 3 2020-03-10 01:59:00 -0700 PDT 0 2 ** * 1 2020-03-09 02:00:00 -0700 PDT 2 2020-03-10 02:00:00 -0700 PDT 3 2020-03-11 02:00:00 -0700 PDT 1 2 ** * 1 2020-03-09 02:01:00 -0700 PDT 2 2020-03-10 02:01:00 -0700 PDT 3 2020-03-11 02:01:00 -0700 PDT -1 hour (DST end) at 2020-11-01 02:00:00: Note: the schedules between 01:00 a.m. to 01:59 a.m. 
were triggered twice on Nov 1st due to the clock being set back: cron sequence workflow execution time 59 1 ** * 1 2020-11-01 01:59:00 -0700 PDT 2 2020-11-01 01:59:00 -0800 PST 3 2020-11-02 01:59:00 -0800 PST 0 2 ** * 1 2020-11-01 02:00:00 -0800 PST 2 2020-11-02 02:00:00 -0800 PST 3 2020-11-03 02:00:00 -0800 PST 1 2 ** * 1 2020-11-01 02:01:00 -0800 PST 2 2020-11-02 02:01:00 -0800 PST 3 2020-11-03 02:01:00 -0800 PST Managing CronWorkflow \u00b6 CLI \u00b6 CronWorkflow can be created from the CLI by using basic commands: $ argo cron create cron.yaml Name: test-cron-wf Namespace: argo Created: Mon Nov 18 10 :17:06 -0800 ( now ) Schedule: * * * * * Suspended: false StartingDeadlineSeconds: 0 ConcurrencyPolicy: Forbid $ argo cron list NAME AGE LAST RUN SCHEDULE SUSPENDED test-cron-wf 49s N/A * * * * * false # some time passes $ argo cron list NAME AGE LAST RUN SCHEDULE SUSPENDED test-cron-wf 56s 2s * * * * * false $ argo cron get test-cron-wf Name: test-cron-wf Namespace: argo Created: Wed Oct 28 07 :19:02 -0600 ( 23 hours ago ) Schedule: * * * * * Suspended: false StartingDeadlineSeconds: 0 ConcurrencyPolicy: Replace LastScheduledTime: Thu Oct 29 06 :51:00 -0600 ( 11 minutes ago ) NextScheduledTime: Thu Oct 29 13 :03:00 +0000 ( 32 seconds from now ) Active Workflows: test-cron-wf-rt4nf Note : NextScheduledRun assumes that the workflow-controller uses UTC as its timezone kubectl \u00b6 Using kubectl apply -f and kubectl get cwf Back-Filling Days \u00b6 See cron backfill . GitOps via Argo CD \u00b6 CronWorkflow resources can be managed with GitOps by using Argo CD UI \u00b6 CronWorkflow resources can also be managed by the UI","title":"Cron Workflows"},{"location":"cron-workflows/#cron-workflows","text":"v2.5 and after","title":"Cron Workflows"},{"location":"cron-workflows/#introduction","text":"CronWorkflow are workflows that run on a preset schedule. They are designed to be converted from Workflow easily and to mimic the same options as Kubernetes CronJob . In essence, CronWorkflow = Workflow + some specific cron options.","title":"Introduction"},{"location":"cron-workflows/#cronworkflow-spec","text":"An example CronWorkflow spec would look like: apiVersion : argoproj.io/v1alpha1 kind : CronWorkflow metadata : name : test-cron-wf spec : schedule : \"* * * * *\" concurrencyPolicy : \"Replace\" startingDeadlineSeconds : 0 workflowSpec : entrypoint : whalesay templates : - name : whalesay container : image : alpine:3.6 command : [ sh , -c ] args : [ \"date; sleep 90\" ]","title":"CronWorkflow Spec"},{"location":"cron-workflows/#workflowspec-and-workflowmetadata","text":"CronWorkflow.spec.workflowSpec is the same type as Workflow.spec and serves as a template for Workflow objects that are created from it. Everything under this spec will be converted to a Workflow . The resulting Workflow name will be a generated name based on the CronWorkflow name. In this example it could be something like test-cron-wf-tj6fe . CronWorkflow.spec.workflowMetadata can be used to add labels and annotations .","title":"workflowSpec and workflowMetadata"},{"location":"cron-workflows/#cronworkflow-options","text":"Option Name Default Value Description schedule None, must be provided Schedule at which the Workflow will be run. E.g. 5 4 * * * timezone Machine timezone Timezone during which the Workflow will be run from the IANA timezone standard, e.g. America/Los_Angeles suspend false If true Workflow scheduling will not occur. 
Can be set from the CLI, GitOps, or directly concurrencyPolicy Allow Policy that determines what to do if multiple Workflows are scheduled at the same time. Available options: Allow : allow all, Replace : remove all old before scheduling a new, Forbid : do not allow any new while there are old startingDeadlineSeconds 0 Number of seconds after the last successful run during which a missed Workflow will be run successfulJobsHistoryLimit 3 Number of successful Workflows that will be persisted at a time failedJobsHistoryLimit 1 Number of failed Workflows that will be persisted at a time","title":"CronWorkflow Options"},{"location":"cron-workflows/#cron-schedule-syntax","text":"The cron scheduler uses the standard cron syntax, as documented on Wikipedia . More detailed documentation for the specific library used is documented here .","title":"Cron Schedule Syntax"},{"location":"cron-workflows/#crash-recovery","text":"If the workflow-controller crashes (and hence the CronWorkflow controller), there are some options you can set to ensure that CronWorkflows that would have been scheduled while the controller was down can still run. Mainly startingDeadlineSeconds can be set to specify the maximum number of seconds past the last successful run of a CronWorkflow during which a missed run will still be executed. For example, if a CronWorkflow that runs every minute is last run at 12:05:00, and the controller crashes between 12:05:55 and 12:06:05, then the expected execution time of 12:06:00 would be missed. However, if startingDeadlineSeconds is set to a value greater than 65 (the amount of time passing between the last scheduled run time of 12:05:00 and the current controller restart time of 12:06:05), then a single instance of the CronWorkflow will be executed exactly at 12:06:05. Currently only a single instance will be executed as a result of setting startingDeadlineSeconds . This setting can also be configured in tandem with concurrencyPolicy to achieve more fine-tuned control.","title":"Crash Recovery"},{"location":"cron-workflows/#daylight-saving","text":"Daylight Saving (DST) is taken into account when using timezone. This means that, depending on the local time of the scheduled job, argo will schedule the workflow once, twice, or not at all when the clock moves forward or back. For example, with timezone set at America/Los_Angeles , we have daylight saving +1 hour (DST start) at 2020-03-08 02:00:00: Note: The schedules between 02:00 a.m. to 02:59 a.m. were skipped on Mar 8th due to the clock being moved forward: cron sequence workflow execution time 59 1 ** * 1 2020-03-08 01:59:00 -0800 PST 2 2020-03-09 01:59:00 -0700 PDT 3 2020-03-10 01:59:00 -0700 PDT 0 2 ** * 1 2020-03-09 02:00:00 -0700 PDT 2 2020-03-10 02:00:00 -0700 PDT 3 2020-03-11 02:00:00 -0700 PDT 1 2 ** * 1 2020-03-09 02:01:00 -0700 PDT 2 2020-03-10 02:01:00 -0700 PDT 3 2020-03-11 02:01:00 -0700 PDT -1 hour (DST end) at 2020-11-01 02:00:00: Note: the schedules between 01:00 a.m. to 01:59 a.m. 
were triggered twice on Nov 1st due to the clock being set back: cron sequence workflow execution time 59 1 ** * 1 2020-11-01 01:59:00 -0700 PDT 2 2020-11-01 01:59:00 -0800 PST 3 2020-11-02 01:59:00 -0800 PST 0 2 ** * 1 2020-11-01 02:00:00 -0800 PST 2 2020-11-02 02:00:00 -0800 PST 3 2020-11-03 02:00:00 -0800 PST 1 2 ** * 1 2020-11-01 02:01:00 -0800 PST 2 2020-11-02 02:01:00 -0800 PST 3 2020-11-03 02:01:00 -0800 PST","title":"Daylight Saving"},{"location":"cron-workflows/#managing-cronworkflow","text":"","title":"Managing CronWorkflow"},{"location":"cron-workflows/#cli","text":"CronWorkflow can be created from the CLI by using basic commands: $ argo cron create cron.yaml Name: test-cron-wf Namespace: argo Created: Mon Nov 18 10 :17:06 -0800 ( now ) Schedule: * * * * * Suspended: false StartingDeadlineSeconds: 0 ConcurrencyPolicy: Forbid $ argo cron list NAME AGE LAST RUN SCHEDULE SUSPENDED test-cron-wf 49s N/A * * * * * false # some time passes $ argo cron list NAME AGE LAST RUN SCHEDULE SUSPENDED test-cron-wf 56s 2s * * * * * false $ argo cron get test-cron-wf Name: test-cron-wf Namespace: argo Created: Wed Oct 28 07 :19:02 -0600 ( 23 hours ago ) Schedule: * * * * * Suspended: false StartingDeadlineSeconds: 0 ConcurrencyPolicy: Replace LastScheduledTime: Thu Oct 29 06 :51:00 -0600 ( 11 minutes ago ) NextScheduledTime: Thu Oct 29 13 :03:00 +0000 ( 32 seconds from now ) Active Workflows: test-cron-wf-rt4nf Note : NextScheduledRun assumes that the workflow-controller uses UTC as its timezone","title":"CLI"},{"location":"cron-workflows/#kubectl","text":"Using kubectl apply -f and kubectl get cwf","title":"kubectl"},{"location":"cron-workflows/#back-filling-days","text":"See cron backfill .","title":"Back-Filling Days"},{"location":"cron-workflows/#gitops-via-argo-cd","text":"CronWorkflow resources can be managed with GitOps by using Argo CD","title":"GitOps via Argo CD"},{"location":"cron-workflows/#ui","text":"CronWorkflow resources can also be managed by the UI","title":"UI"},{"location":"data-sourcing-and-transformation/","text":"Data Sourcing and Transformations \u00b6 v3.1 and after We have intentionally made this feature available with only bare-bones functionality. Our hope is that we are able to build this feature with our community's feedback. If you have ideas and use cases for this feature, please open an enhancement proposal on GitHub. Additionally, please take a look at our current ideas at the bottom of this document. Introduction \u00b6 Users often source and transform data as part of their workflows. The data template provides first-class support for these common operations. data templates can best be understood by looking at a common data sourcing and transformation operation in bash : find -r . | grep \".pdf\" | sed \"s/foo/foo.ready/\" Such operations consist of two main parts: A \"source\" of data: find -r . A series of \"transformations\" which transform the output of the source serially: | grep \".pdf\" | sed \"s/foo/foo.ready/\" This operation, for example, could be useful in sourcing a potential list of files to be processed and filtering and manipulating the list as desired. 
In Argo, this operation would be written as: - name : generate-artifacts data : source : # Define a source for the data, only a single \"source\" is permitted artifactPaths : # A predefined source: Generate a list of all artifact paths in a given repository s3 : # Source from an S3 bucket bucket : test endpoint : minio:9000 insecure : true accessKeySecret : name : my-minio-cred key : accesskey secretKeySecret : name : my-minio-cred key : secretkey transformation : # The source is then passed to be transformed by transformations defined here - expression : \"filter(data, {# endsWith \\\".pdf\\\"})\" - expression : \"map(data, {# + \\\".ready\\\"})\" Spec \u00b6 A data template must always contain a source . Current available sources: artifactPaths : generates a list of artifact paths from the artifact repository specified A data template may contain any number of transformations (or zero). The transformations will be applied serially in order. Current available transformations: expression : an expr expression. See language definition here . When defining expr expressions Argo will pass the available data to the environment as a variable called data (see example above). We understand that the expression transformation is limited. We intend to greatly expand the functionality of this template with our community's feedback. Please see the link at the top of this document to submit ideas or use cases for this feature.","title":"Data Sourcing and Transformations"},{"location":"data-sourcing-and-transformation/#data-sourcing-and-transformations","text":"v3.1 and after We have intentionally made this feature available with only bare-bones functionality. Our hope is that we are able to build this feature with our community's feedback. If you have ideas and use cases for this feature, please open an enhancement proposal on GitHub. Additionally, please take a look at our current ideas at the bottom of this document.","title":"Data Sourcing and Transformations"},{"location":"data-sourcing-and-transformation/#introduction","text":"Users often source and transform data as part of their workflows. The data template provides first-class support for these common operations. data templates can best be understood by looking at a common data sourcing and transformation operation in bash : find -r . | grep \".pdf\" | sed \"s/foo/foo.ready/\" Such operations consist of two main parts: A \"source\" of data: find -r . A series of \"transformations\" which transform the output of the source serially: | grep \".pdf\" | sed \"s/foo/foo.ready/\" This operation, for example, could be useful in sourcing a potential list of files to be processed and filtering and manipulating the list as desired. In Argo, this operation would be written as: - name : generate-artifacts data : source : # Define a source for the data, only a single \"source\" is permitted artifactPaths : # A predefined source: Generate a list of all artifact paths in a given repository s3 : # Source from an S3 bucket bucket : test endpoint : minio:9000 insecure : true accessKeySecret : name : my-minio-cred key : accesskey secretKeySecret : name : my-minio-cred key : secretkey transformation : # The source is then passed to be transformed by transformations defined here - expression : \"filter(data, {# endsWith \\\".pdf\\\"})\" - expression : \"map(data, {# + \\\".ready\\\"})\"","title":"Introduction"},{"location":"data-sourcing-and-transformation/#spec","text":"A data template must always contain a source . 
Current available sources: artifactPaths : generates a list of artifact paths from the artifact repository specified A data template may contain any number of transformations (or zero). The transformations will be applied serially in order. Current available transformations: expression : an expr expression. See language definition here . When defining expr expressions Argo will pass the available data to the environment as a variable called data (see example above). We understand that the expression transformation is limited. We intend to greatly expand the functionality of this template with our community's feedback. Please see the link at the top of this document to submit ideas or use cases for this feature.","title":"Spec"},{"location":"debug-pause/","text":"Debug Pause \u00b6 v3.3 and after Introduction \u00b6 The debug pause feature makes it possible to pause individual workflow steps for debugging before, after or both and then release the steps from the paused state. Currently this feature is only supported when using the Emissary Executor . In order to pause a container, env variables are used: ARGO_DEBUG_PAUSE_AFTER - to pause a step after execution ARGO_DEBUG_PAUSE_BEFORE - to pause a step before execution Example workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : pause-after- spec : entrypoint : whalesay templates : - name : whalesay container : image : argoproj/argosay:v2 env : - name : ARGO_DEBUG_PAUSE_AFTER value : 'true' In order to release a step from a paused state, marker files are used named /var/run/argo/ctr/main/after or /var/run/argo/ctr/main/before corresponding to when the step is paused. Pausing steps can be used together with ephemeral containers when a shell is not available in the used container. Example \u00b6 1) Create a workflow where the debug pause env is set; in this example ARGO_DEBUG_PAUSE_AFTER will be set and thus the step will be paused after execution of the user code. pause-after.yaml apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : pause-after- spec : entrypoint : whalesay templates : - name : whalesay container : image : argoproj/argosay:v2 env : - name : ARGO_DEBUG_PAUSE_AFTER value : 'true' argo submit -n argo --watch pause-after.yaml Create a shell in the container of interest or create an ephemeral container in the pod; in this example ephemeral containers are used. kubectl debug -n argo -it POD_NAME --image = busybox --target = main --share-processes In order to have access to the persistent volume used by the workflow step, --share-processes will have to be used. The ephemeral container can be used to perform debugging operations. When debugging has been completed, create the marker file to allow the workflow step to continue. When using process namespace sharing, container file systems are visible to other containers in the pod through the /proc/$pid/root link. touch /proc/1/root/run/argo/ctr/main/after","title":"Debug Pause"},{"location":"debug-pause/#debug-pause","text":"v3.3 and after","title":"Debug Pause"},{"location":"debug-pause/#introduction","text":"The debug pause feature makes it possible to pause individual workflow steps for debugging before, after or both and then release the steps from the paused state.
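If the paused step's image does have a shell, you can also release it without an ephemeral container by creating the marker file directly with kubectl exec, as in this sketch (the pod name is a placeholder, and this assumes touch is available in the main container):

```bash
# Release a step paused after execution (ARGO_DEBUG_PAUSE_AFTER)
kubectl exec -n argo POD_NAME -c main -- touch /var/run/argo/ctr/main/after

# Release a step paused before execution (ARGO_DEBUG_PAUSE_BEFORE)
kubectl exec -n argo POD_NAME -c main -- touch /var/run/argo/ctr/main/before
```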
Currently this feature is only supported when using the Emissary Executor In order to pause a container env variables are used: ARGO_DEBUG_PAUSE_AFTER - to pause a step after execution ARGO_DEBUG_PAUSE_BEFORE - to pause a step before execution Example workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : pause-after- spec : entrypoint : whalesay templates : - name : whalesay container : image : argoproj/argosay:v2 env : - name : ARGO_DEBUG_PAUSE_AFTER value : 'true' In order to release a step from a pause state, marker files are used named /var/run/argo/ctr/main/after or /var/run/argo/ctr/main/before corresponding to when the step is paused. Pausing steps can be used together with ephemeral containers when a shell is not available in the used container.","title":"Introduction"},{"location":"debug-pause/#example","text":"1) Create a workflow where the debug pause env in set, in this example ARGO_DEBUG_PAUSE_AFTER will be set and thus the step will be paused after execution of the user code. pause-after.yaml apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : pause-after- spec : entrypoint : whalesay templates : - name : whalesay container : image : argoproj/argosay:v2 env : - name : ARGO_DEBUG_PAUSE_AFTER value : 'true' argo submit -n argo --watch pause-after.yaml Create a shell in the container of interest of create a ephemeral container in the pod, in this example ephemeral containers are used. kubectl debug -n argo -it POD_NAME --image = busybox --target = main --share-processes In order to have access to the persistence volume used by the workflow step, --share-processes will have to be used. The ephemeral container can be used to perform debugging operations. When debugging has been completed, create the marker file to allow the workflow step to continue. When using process name space sharing container file systems are visible to other containers in the pod through the /proc/$pid/root link. touch /proc/1/root/run/argo/ctr/main/after","title":"Example"},{"location":"default-workflow-specs/","text":"Default Workflow Spec \u00b6 v2.7 and after Introduction \u00b6 Default Workflow spec values can be set at the controller config map that will apply to all Workflows executed from said controller. If a Workflow has a value that also has a default value set in the config map, the Workflow's value will take precedence. Setting Default Workflow Values \u00b6 Default Workflow values can be specified by adding them under the workflowDefaults key in the workflow-controller-configmap . Values can be added as they would under the Workflow.spec tag. 
For example, to specify default values that would partially produce the following Workflow : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : gc-ttl- annotations : argo : workflows labels : foo : bar spec : ttlStrategy : secondsAfterSuccess : 5 # Time to live after workflow is successful parallelism : 3 The following would be specified in the Config Map: # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : # Default values that will apply to all Workflows from this controller, unless overridden on the Workflow-level workflowDefaults : | metadata: annotations: argo: workflows labels: foo: bar spec: ttlStrategy: secondsAfterSuccess: 5 parallelism: 3","title":"Default Workflow Spec"},{"location":"default-workflow-specs/#default-workflow-spec","text":"v2.7 and after","title":"Default Workflow Spec"},{"location":"default-workflow-specs/#introduction","text":"Default Workflow spec values can be set at the controller config map that will apply to all Workflows executed from said controller. If a Workflow has a value that also has a default value set in the config map, the Workflow's value will take precedence.","title":"Introduction"},{"location":"default-workflow-specs/#setting-default-workflow-values","text":"Default Workflow values can be specified by adding them under the workflowDefaults key in the workflow-controller-configmap . Values can be added as they would under the Workflow.spec tag. For example, to specify default values that would partially produce the following Workflow : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : gc-ttl- annotations : argo : workflows labels : foo : bar spec : ttlStrategy : secondsAfterSuccess : 5 # Time to live after workflow is successful parallelism : 3 The following would be specified in the Config Map: # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : # Default values that will apply to all Workflows from this controller, unless overridden on the Workflow-level workflowDefaults : | metadata: annotations: argo: workflows labels: foo: bar spec: ttlStrategy: secondsAfterSuccess: 5 parallelism: 3","title":"Setting Default Workflow Values"},{"location":"disaster-recovery/","text":"Disaster Recovery (DR) \u00b6 We only store data in your Kubernetes cluster. You should consider backing this up regularly. Exporting example: kubectl get wf,cwf,cwft,wftmpl -A -o yaml > backup.yaml Importing example: kubectl apply -f backup.yaml You should also back-up any SQL persistence you use regularly with whatever tool is provided with it.","title":"Disaster Recovery (DR)"},{"location":"disaster-recovery/#disaster-recovery-dr","text":"We only store data in your Kubernetes cluster. You should consider backing this up regularly. Exporting example: kubectl get wf,cwf,cwft,wftmpl -A -o yaml > backup.yaml Importing example: kubectl apply -f backup.yaml You should also back-up any SQL persistence you use regularly with whatever tool is provided with it.","title":"Disaster Recovery (DR)"},{"location":"doc-changes/","text":"Documentation Changes \u00b6 Docs help our customers understand how to use workflows and fix their own problems. Doc changes are checked for spelling, broken links, and lint issues by CI. To check locally, run make docs . 
General guidelines: Explain when you would want to use a feature. Provide working examples. Format code using back-ticks to avoid it being reported as a spelling error. Prefer 1 sentence per line of markdown. Follow the recommendations in the official Kubernetes Documentation Style Guide . Particularly useful sections include Content best practices and Patterns to avoid . Note : Argo does not use the same tooling, so the sections on \"shortcodes\" and \"EditorConfig\" are not relevant. Running Locally \u00b6 To test/run locally: make docs-serve Tips \u00b6 Use a service like Grammarly to check your grammar. Having your computer read text out loud is a way to catch problems, e.g.: Word substitutions (i.e. the wrong word is used, but spelled correctly). Sentences that do not read correctly will sound wrong. On Mac, to set-up: Go to System Preferences / Accessibility / Spoken Content . Choose a System Voice (I like Siri Voice 1 ). Enable Speak selection . To hear text, select the text you want to hear, then press option+escape.","title":"Documentation Changes"},{"location":"doc-changes/#documentation-changes","text":"Docs help our customers understand how to use workflows and fix their own problems. Doc changes are checked for spelling, broken links, and lint issues by CI. To check locally, run make docs . General guidelines: Explain when you would want to use a feature. Provide working examples. Format code using back-ticks to avoid it being reported as a spelling error. Prefer 1 sentence per line of markdown. Follow the recommendations in the official Kubernetes Documentation Style Guide . Particularly useful sections include Content best practices and Patterns to avoid . Note : Argo does not use the same tooling, so the sections on \"shortcodes\" and \"EditorConfig\" are not relevant.","title":"Documentation Changes"},{"location":"doc-changes/#running-locally","text":"To test/run locally: make docs-serve","title":"Running Locally"},{"location":"doc-changes/#tips","text":"Use a service like Grammarly to check your grammar. Having your computer read text out loud is a way to catch problems, e.g.: Word substitutions (i.e. the wrong word is used, but spelled correctly). Sentences that do not read correctly will sound wrong. On Mac, to set-up: Go to System Preferences / Accessibility / Spoken Content . Choose a System Voice (I like Siri Voice 1 ). Enable Speak selection . To hear text, select the text you want to hear, then press option+escape.","title":"Tips"},{"location":"empty-dir/","text":"Empty Dir \u00b6 While by default, the Docker and PNS workflow executors can get output artifacts/parameters from the base layer (e.g. /tmp ), neither the Kubelet nor the K8SAPI executors can. It is unlikely you can get output artifacts/parameters from the base layer if you run your workflow pods with a security context . You can work around this constraint by mounting volumes onto your pod. The easiest way to do this is to use an emptyDir volume. Note This is only needed for output artifacts/parameters.
Input artifacts/parameters are automatically mounted to an empty-dir if needed This example shows how to mount an output volume: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : empty-dir- spec : entrypoint : main templates : - name : main container : image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"cowsay hello world | tee /mnt/out/hello_world.txt\" ] volumeMounts : - name : out mountPath : /mnt/out volumes : - name : out emptyDir : { } outputs : parameters : - name : message valueFrom : path : /mnt/out/hello_world.txt","title":"Empty Dir"},{"location":"empty-dir/#empty-dir","text":"While by default, the Docker and PNS workflow executors can get output artifacts/parameters from the base layer (e.g. /tmp ), neither the Kubelet nor the K8SAPI executors can. It is unlikely you can get output artifacts/parameters from the base layer if you run your workflow pods with a security context . You can work-around this constraint by mounting volumes onto your pod. The easiest way to do this is to use as emptyDir volume. Note This is only needed for output artifacts/parameters. Input artifacts/parameters are automatically mounted to an empty-dir if needed This example shows how to mount an output volume: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : empty-dir- spec : entrypoint : main templates : - name : main container : image : argoproj/argosay:v2 command : [ sh , -c ] args : [ \"cowsay hello world | tee /mnt/out/hello_world.txt\" ] volumeMounts : - name : out mountPath : /mnt/out volumes : - name : out emptyDir : { } outputs : parameters : - name : message valueFrom : path : /mnt/out/hello_world.txt","title":"Empty Dir"},{"location":"enhanced-depends-logic/","text":"Enhanced Depends Logic \u00b6 v2.9 and after Introduction \u00b6 Previous to version 2.8, the only way to specify dependencies in DAG templates was to use the dependencies field and specify a list of other tasks the current task depends on. This syntax was limiting because it does not allow the user to specify which result of the task to depend on. For example, a task may only be relevant to run if the dependent task succeeded (or failed, etc.). Depends \u00b6 To remedy this, there exists a new field called depends , which allows users to specify dependent tasks, their statuses, as well as any complex boolean logic. The field is a string field and the syntax is expression-like with operands having form . . Examples include task-1.Succeeded , task-2.Failed , task-3.Daemoned . The full list of available task results is as follows: Task Result Description Meaning .Succeeded Task Succeeded Task finished with no error .Failed Task Failed Task exited with a non-0 exit code .Errored Task Errored Task had an error other than a non-0 exit code .Skipped Task Skipped Task was skipped .Omitted Task Omitted Task was omitted .Daemoned Task is Daemoned and is not Pending For convenience, if an omitted task result is equivalent to (task.Succeeded || task.Skipped || task.Daemoned) . For example: depends : \"task || task-2.Failed\" is equivalent to: depends : (task.Succeeded || task.Skipped || task.Daemoned) || task-2.Failed Full boolean logic is also available. Operators include: && || ! 
Example: depends : \"(task-2.Succeeded || task-2.Skipped) && !task-3.Failed\" In the case that you're depending on a task that uses withItems , you can depend on whether any of the item tasks are successful or all have failed using .AnySucceeded and .AllFailed , for example: depends : \"task-1.AnySucceeded || task-2.AllFailed\" Compatibility with dependencies and dag.task.continueOn \u00b6 This feature is fully compatible with dependencies and conversion is easy. To convert simply join your dependencies with && : dependencies : [ \"A\" , \"B\" , \"C\" ] is equivalent to: depends : \"A && B && C\" Because of the added control found in depends , the dag.task.continueOn is not available when using it. Furthermore, it is not possible to use both dependencies and depends in the same task group.","title":"Enhanced Depends Logic"},{"location":"enhanced-depends-logic/#enhanced-depends-logic","text":"v2.9 and after","title":"Enhanced Depends Logic"},{"location":"enhanced-depends-logic/#introduction","text":"Previous to version 2.8, the only way to specify dependencies in DAG templates was to use the dependencies field and specify a list of other tasks the current task depends on. This syntax was limiting because it does not allow the user to specify which result of the task to depend on. For example, a task may only be relevant to run if the dependent task succeeded (or failed, etc.).","title":"Introduction"},{"location":"enhanced-depends-logic/#depends","text":"To remedy this, there exists a new field called depends , which allows users to specify dependent tasks, their statuses, as well as any complex boolean logic. The field is a string field and the syntax is expression-like with operands having form . . Examples include task-1.Succeeded , task-2.Failed , task-3.Daemoned . The full list of available task results is as follows: Task Result Description Meaning .Succeeded Task Succeeded Task finished with no error .Failed Task Failed Task exited with a non-0 exit code .Errored Task Errored Task had an error other than a non-0 exit code .Skipped Task Skipped Task was skipped .Omitted Task Omitted Task was omitted .Daemoned Task is Daemoned and is not Pending For convenience, if an omitted task result is equivalent to (task.Succeeded || task.Skipped || task.Daemoned) . For example: depends : \"task || task-2.Failed\" is equivalent to: depends : (task.Succeeded || task.Skipped || task.Daemoned) || task-2.Failed Full boolean logic is also available. Operators include: && || ! Example: depends : \"(task-2.Succeeded || task-2.Skipped) && !task-3.Failed\" In the case that you're depending on a task that uses withItems , you can depend on whether any of the item tasks are successful or all have failed using .AnySucceeded and .AllFailed , for example: depends : \"task-1.AnySucceeded || task-2.AllFailed\"","title":"Depends"},{"location":"enhanced-depends-logic/#compatibility-with-dependencies-and-dagtaskcontinueon","text":"This feature is fully compatible with dependencies and conversion is easy. To convert simply join your dependencies with && : dependencies : [ \"A\" , \"B\" , \"C\" ] is equivalent to: depends : \"A && B && C\" Because of the added control found in depends , the dag.task.continueOn is not available when using it. 
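To make the syntax concrete, here is a minimal DAG sketch (task and template names are illustrative) that combines a plain dependency with failure-based logic:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: depends-example-
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          - name: A
            template: echo
          - name: B
            template: echo
            depends: "A"                    # shorthand for A.Succeeded || A.Skipped || A.Daemoned
          - name: notify
            template: echo
            depends: "A.Failed || B.Failed" # only runs if an upstream task failed
    - name: echo
      container:
        image: argoproj/argosay:v2
```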
Furthermore, it is not possible to use both dependencies and depends in the same task group.","title":"Compatibility with dependencies and dag.task.continueOn"},{"location":"environment-variables/","text":"Environment Variables \u00b6 This document outlines environment variables that can be used to customize behavior. Warning Environment variables are typically added to test out experimental features and should not be used by most users. Environment variables may be removed at any time. Controller \u00b6 Name Type Default Description ARGO_AGENT_TASK_WORKERS int 16 The number of task workers for the agent pod. ALL_POD_CHANGES_SIGNIFICANT bool false Whether to consider all pod changes as significant during pod reconciliation. ALWAYS_OFFLOAD_NODE_STATUS bool false Whether to always offload the node status. ARCHIVED_WORKFLOW_GC_PERIOD time.Duration 24h The periodicity for GC of archived workflows. ARGO_PPROF bool false Enable pprof endpoints ARGO_PROGRESS_PATCH_TICK_DURATION time.Duration 1m How often self reported progress is patched into the pod annotations which means how long it takes until the controller picks up the progress change. Set to 0 to disable self reporting progress. ARGO_PROGRESS_FILE_TICK_DURATION time.Duration 3s How often the progress file is read by the executor. Set to 0 to disable self reporting progress. ARGO_REMOVE_PVC_PROTECTION_FINALIZER bool true Remove the kubernetes.io/pvc-protection finalizer from persistent volume claims (PVC) after marking PVCs created for the workflow for deletion, so deleted is not blocked until the pods are deleted. #6629 ARGO_TRACE string `` Whether to enable tracing statements in Argo components. ARGO_AGENT_PATCH_RATE time.Duration DEFAULT_REQUEUE_TIME Rate that the Argo Agent will patch the workflow task-set. ARGO_AGENT_CPU_LIMIT resource.Quantity 100m CPU resource limit for the agent. ARGO_AGENT_MEMORY_LIMIT resource.Quantity 256m Memory resource limit for the agent. BUBBLE_ENTRY_TEMPLATE_ERR bool true Whether to bubble up template errors to workflow. CACHE_GC_PERIOD time.Duration 0s How often to perform memoization cache GC, which is disabled by default and can be enabled by providing a non-zero duration. CACHE_GC_AFTER_NOT_HIT_DURATION time.Duration 30s When a memoization cache has not been hit after this duration, it will be deleted. CRON_SYNC_PERIOD time.Duration 10s How often to sync cron workflows. DEFAULT_REQUEUE_TIME time.Duration 10s The re-queue time for the rate limiter of the workflow queue. DISABLE_MAX_RECURSION bool false Set to true to disable the recursion preventer, which will stop a workflow running which has called into a child template 100 times EXPRESSION_TEMPLATES bool true Escape hatch to disable expression templates. EVENT_AGGREGATION_WITH_ANNOTATIONS bool false Whether event annotations will be used when aggregating events. GZIP_IMPLEMENTATION string PGZip The implementation of compression/decompression. Currently only \" PGZip \" and \" GZip \" are supported. INFORMER_WRITE_BACK bool true Whether to write back to informer instead of catching up. HEALTHZ_AGE time.Duration 5m How old a un-reconciled workflow is to report unhealthy. INDEX_WORKFLOW_SEMAPHORE_KEYS bool true Whether or not to index semaphores. LEADER_ELECTION_IDENTITY string Controller's metadata.name The ID used for workflow controllers to elect a leader. LEADER_ELECTION_DISABLE bool false Whether leader election should be disabled. 
LEADER_ELECTION_LEASE_DURATION time.Duration 15s The duration that non-leader candidates will wait to force acquire leadership. LEADER_ELECTION_RENEW_DEADLINE time.Duration 10s The duration that the acting master will retry refreshing leadership before giving up. LEADER_ELECTION_RETRY_PERIOD time.Duration 5s The duration that the leader election clients should wait between tries of actions. MAX_OPERATION_TIME time.Duration 30s The maximum time a workflow operation is allowed to run for before re-queuing the workflow onto the work queue. OFFLOAD_NODE_STATUS_TTL time.Duration 5m The TTL to delete the offloaded node status. Currently only used for testing. OPERATION_DURATION_METRIC_BUCKET_COUNT int 6 The number of buckets to collect the metric for the operation duration. POD_NAMES string v2 Whether to have pod names contain the template name (v2) or be the node id (v1) - should be set the same for Argo Server. RECENTLY_STARTED_POD_DURATION time.Duration 10s The duration of a pod before the pod is considered to be recently started. RETRY_BACKOFF_DURATION time.Duration 10ms The retry back-off duration when retrying API calls. RETRY_BACKOFF_FACTOR float 2.0 The retry back-off factor when retrying API calls. RETRY_BACKOFF_STEPS int 5 The retry back-off steps when retrying API calls. RETRY_HOST_NAME_LABEL_KEY string kubernetes.io/hostname The label key for host name used when retrying templates. TRANSIENT_ERROR_PATTERN string \"\" The regular expression that represents additional patterns for transient errors. WF_DEL_PROPAGATION_POLICY string \"\" The deletion propagation policy for workflows. WORKFLOW_GC_PERIOD time.Duration 5m The periodicity for GC of workflows. SEMAPHORE_NOTIFY_DELAY time.Duration 1s Tuning Delay when notifying semaphore waiters about availability in the semaphore CLI parameters of the Controller can be specified as environment variables with the ARGO_ prefix. For example: workflow-controller --managed-namespace = argo Can be expressed as: ARGO_MANAGED_NAMESPACE = argo workflow-controller You can set environment variables for the Controller Deployment's container spec like the following: apiVersion : apps/v1 kind : Deployment metadata : name : workflow-controller spec : selector : matchLabels : app : workflow-controller template : metadata : labels : app : workflow-controller spec : containers : - env : - name : WORKFLOW_GC_PERIOD value : 30s Executor \u00b6 Name Type Default Description EXECUTOR_RETRY_BACKOFF_DURATION time.Duration 1s The retry back-off duration when the workflow executor performs retries. EXECUTOR_RETRY_BACKOFF_FACTOR float 1.6 The retry back-off factor when the workflow executor performs retries. EXECUTOR_RETRY_BACKOFF_JITTER float 0.5 The retry back-off jitter when the workflow executor performs retries. EXECUTOR_RETRY_BACKOFF_STEPS int 5 The retry back-off steps when the workflow executor performs retries. REMOVE_LOCAL_ART_PATH bool false Whether to remove local artifacts. RESOURCE_STATE_CHECK_INTERVAL time.Duration 5s The time interval between resource status checks against the specified success and failure conditions. WAIT_CONTAINER_STATUS_CHECK_INTERVAL time.Duration 5s The time interval for wait container to check whether the containers have completed. 
You can set environment variables for the Executor in your workflow-controller-configmap like the following: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : config : | executor: env: - name: RESOURCE_STATE_CHECK_INTERVAL value: 3s Argo Server \u00b6 Name Type Default Description DISABLE_VALUE_LIST_RETRIEVAL_KEY_PATTERN string \"\" Disable the retrieval of the list of label values for keys based on this regular expression. FIRST_TIME_USER_MODAL bool true Show this modal. FEEDBACK_MODAL bool true Show this modal. NEW_VERSION_MODAL bool true Show this modal. POD_NAMES string v2 Whether to have pod names contain the template name (v2) or be the node id (v1) - should be set the same for Controller GRPC_MESSAGE_SIZE string 104857600 Use different GRPC Max message size for Server (supporting huge workflows). CLI parameters of the Server can be specified as environment variables with the ARGO_ prefix. For example: argo server --managed-namespace = argo Can be expressed as: ARGO_MANAGED_NAMESPACE = argo argo server You can set environment variables for the Server Deployment's container spec like the following: apiVersion : apps/v1 kind : Deployment metadata : name : argo-server spec : selector : matchLabels : app : argo-server template : metadata : labels : app : argo-server spec : containers : - args : - server image : argoproj/argocli:latest name : argo-server env : - name : GRPC_MESSAGE_SIZE value : \"209715200\" ports : # ...","title":"Environment Variables"},{"location":"environment-variables/#environment-variables","text":"This document outlines environment variables that can be used to customize behavior. Warning Environment variables are typically added to test out experimental features and should not be used by most users. Environment variables may be removed at any time.","title":"Environment Variables"},{"location":"environment-variables/#controller","text":"Name Type Default Description ARGO_AGENT_TASK_WORKERS int 16 The number of task workers for the agent pod. ALL_POD_CHANGES_SIGNIFICANT bool false Whether to consider all pod changes as significant during pod reconciliation. ALWAYS_OFFLOAD_NODE_STATUS bool false Whether to always offload the node status. ARCHIVED_WORKFLOW_GC_PERIOD time.Duration 24h The periodicity for GC of archived workflows. ARGO_PPROF bool false Enable pprof endpoints ARGO_PROGRESS_PATCH_TICK_DURATION time.Duration 1m How often self reported progress is patched into the pod annotations which means how long it takes until the controller picks up the progress change. Set to 0 to disable self reporting progress. ARGO_PROGRESS_FILE_TICK_DURATION time.Duration 3s How often the progress file is read by the executor. Set to 0 to disable self reporting progress. ARGO_REMOVE_PVC_PROTECTION_FINALIZER bool true Remove the kubernetes.io/pvc-protection finalizer from persistent volume claims (PVC) after marking PVCs created for the workflow for deletion, so deleted is not blocked until the pods are deleted. #6629 ARGO_TRACE string `` Whether to enable tracing statements in Argo components. ARGO_AGENT_PATCH_RATE time.Duration DEFAULT_REQUEUE_TIME Rate that the Argo Agent will patch the workflow task-set. ARGO_AGENT_CPU_LIMIT resource.Quantity 100m CPU resource limit for the agent. ARGO_AGENT_MEMORY_LIMIT resource.Quantity 256m Memory resource limit for the agent. BUBBLE_ENTRY_TEMPLATE_ERR bool true Whether to bubble up template errors to workflow. 
CACHE_GC_PERIOD time.Duration 0s How often to perform memoization cache GC, which is disabled by default and can be enabled by providing a non-zero duration. CACHE_GC_AFTER_NOT_HIT_DURATION time.Duration 30s When a memoization cache has not been hit after this duration, it will be deleted. CRON_SYNC_PERIOD time.Duration 10s How often to sync cron workflows. DEFAULT_REQUEUE_TIME time.Duration 10s The re-queue time for the rate limiter of the workflow queue. DISABLE_MAX_RECURSION bool false Set to true to disable the recursion preventer, which will stop a workflow running which has called into a child template 100 times EXPRESSION_TEMPLATES bool true Escape hatch to disable expression templates. EVENT_AGGREGATION_WITH_ANNOTATIONS bool false Whether event annotations will be used when aggregating events. GZIP_IMPLEMENTATION string PGZip The implementation of compression/decompression. Currently only \" PGZip \" and \" GZip \" are supported. INFORMER_WRITE_BACK bool true Whether to write back to informer instead of catching up. HEALTHZ_AGE time.Duration 5m How old a un-reconciled workflow is to report unhealthy. INDEX_WORKFLOW_SEMAPHORE_KEYS bool true Whether or not to index semaphores. LEADER_ELECTION_IDENTITY string Controller's metadata.name The ID used for workflow controllers to elect a leader. LEADER_ELECTION_DISABLE bool false Whether leader election should be disabled. LEADER_ELECTION_LEASE_DURATION time.Duration 15s The duration that non-leader candidates will wait to force acquire leadership. LEADER_ELECTION_RENEW_DEADLINE time.Duration 10s The duration that the acting master will retry refreshing leadership before giving up. LEADER_ELECTION_RETRY_PERIOD time.Duration 5s The duration that the leader election clients should wait between tries of actions. MAX_OPERATION_TIME time.Duration 30s The maximum time a workflow operation is allowed to run for before re-queuing the workflow onto the work queue. OFFLOAD_NODE_STATUS_TTL time.Duration 5m The TTL to delete the offloaded node status. Currently only used for testing. OPERATION_DURATION_METRIC_BUCKET_COUNT int 6 The number of buckets to collect the metric for the operation duration. POD_NAMES string v2 Whether to have pod names contain the template name (v2) or be the node id (v1) - should be set the same for Argo Server. RECENTLY_STARTED_POD_DURATION time.Duration 10s The duration of a pod before the pod is considered to be recently started. RETRY_BACKOFF_DURATION time.Duration 10ms The retry back-off duration when retrying API calls. RETRY_BACKOFF_FACTOR float 2.0 The retry back-off factor when retrying API calls. RETRY_BACKOFF_STEPS int 5 The retry back-off steps when retrying API calls. RETRY_HOST_NAME_LABEL_KEY string kubernetes.io/hostname The label key for host name used when retrying templates. TRANSIENT_ERROR_PATTERN string \"\" The regular expression that represents additional patterns for transient errors. WF_DEL_PROPAGATION_POLICY string \"\" The deletion propagation policy for workflows. WORKFLOW_GC_PERIOD time.Duration 5m The periodicity for GC of workflows. SEMAPHORE_NOTIFY_DELAY time.Duration 1s Tuning Delay when notifying semaphore waiters about availability in the semaphore CLI parameters of the Controller can be specified as environment variables with the ARGO_ prefix. 
For example: workflow-controller --managed-namespace = argo Can be expressed as: ARGO_MANAGED_NAMESPACE = argo workflow-controller You can set environment variables for the Controller Deployment's container spec like the following: apiVersion : apps/v1 kind : Deployment metadata : name : workflow-controller spec : selector : matchLabels : app : workflow-controller template : metadata : labels : app : workflow-controller spec : containers : - env : - name : WORKFLOW_GC_PERIOD value : 30s","title":"Controller"},{"location":"environment-variables/#executor","text":"Name Type Default Description EXECUTOR_RETRY_BACKOFF_DURATION time.Duration 1s The retry back-off duration when the workflow executor performs retries. EXECUTOR_RETRY_BACKOFF_FACTOR float 1.6 The retry back-off factor when the workflow executor performs retries. EXECUTOR_RETRY_BACKOFF_JITTER float 0.5 The retry back-off jitter when the workflow executor performs retries. EXECUTOR_RETRY_BACKOFF_STEPS int 5 The retry back-off steps when the workflow executor performs retries. REMOVE_LOCAL_ART_PATH bool false Whether to remove local artifacts. RESOURCE_STATE_CHECK_INTERVAL time.Duration 5s The time interval between resource status checks against the specified success and failure conditions. WAIT_CONTAINER_STATUS_CHECK_INTERVAL time.Duration 5s The time interval for wait container to check whether the containers have completed. You can set environment variables for the Executor in your workflow-controller-configmap like the following: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : config : | executor: env: - name: RESOURCE_STATE_CHECK_INTERVAL value: 3s","title":"Executor"},{"location":"environment-variables/#argo-server","text":"Name Type Default Description DISABLE_VALUE_LIST_RETRIEVAL_KEY_PATTERN string \"\" Disable the retrieval of the list of label values for keys based on this regular expression. FIRST_TIME_USER_MODAL bool true Show this modal. FEEDBACK_MODAL bool true Show this modal. NEW_VERSION_MODAL bool true Show this modal. POD_NAMES string v2 Whether to have pod names contain the template name (v2) or be the node id (v1) - should be set the same for Controller GRPC_MESSAGE_SIZE string 104857600 Use different GRPC Max message size for Server (supporting huge workflows). CLI parameters of the Server can be specified as environment variables with the ARGO_ prefix. For example: argo server --managed-namespace = argo Can be expressed as: ARGO_MANAGED_NAMESPACE = argo argo server You can set environment variables for the Server Deployment's container spec like the following: apiVersion : apps/v1 kind : Deployment metadata : name : argo-server spec : selector : matchLabels : app : argo-server template : metadata : labels : app : argo-server spec : containers : - args : - server image : argoproj/argocli:latest name : argo-server env : - name : GRPC_MESSAGE_SIZE value : \"209715200\" ports : # ...","title":"Argo Server"},{"location":"estimated-duration/","text":"Estimated Duration \u00b6 v2.12 and after When you run a workflow, the controller will try to estimate its duration. This is based on the most recently successful workflow submitted from the same workflow template, cluster workflow template or cron workflow. To get this data, the controller queries the Kubernetes API first (as this is faster) and then workflow archive (if enabled). If you've used tools like Jenkins, you'll know that that estimates can be inaccurate: A pod spent a long amount of time pending scheduling. 
The workflow is non-deterministic, e.g. it uses when to execute different paths. The workflow can vary in scale, e.g. sometimes it uses withItems and so sometimes runs 100 nodes, sometimes 1000. The pod runtimes are unpredictable. The workflow is parametrized, and different parameters affect its duration.\",\"title\":\"Estimated Duration\"},{\"location\":\"estimated-duration/#estimated-duration\",\"text\":\"v2.12 and after When you run a workflow, the controller will try to estimate its duration. This is based on the most recently successful workflow submitted from the same workflow template, cluster workflow template or cron workflow. To get this data, the controller queries the Kubernetes API first (as this is faster) and then workflow archive (if enabled). If you've used tools like Jenkins, you'll know that estimates can be inaccurate: A pod spent a long amount of time pending scheduling. The workflow is non-deterministic, e.g. it uses when to execute different paths. The workflow can vary in scale, e.g. sometimes it uses withItems and so sometimes runs 100 nodes, sometimes 1000. The pod runtimes are unpredictable. The workflow is parametrized, and different parameters affect its duration.\",\"title\":\"Estimated Duration\"},{\"location\":\"events/\",\"text\":\"Events \u00b6 v2.11 and after Overview \u00b6 To support external webhooks, we have this endpoint /api/v1/events/{namespace}/{discriminator} . Events sent to that can be any JSON data. These events can submit workflow templates or cluster workflow templates . You may also wish to read about webhooks . Authentication and Security \u00b6 Clients wanting to send events to the endpoint need an access token . It is only possible to submit workflow templates your access token has access to: example role . Example (note the trailing slash): curl https://localhost:2746/api/v1/events/argo/ \\ -H \"Authorization: $ARGO_TOKEN \" \\ -d '{\"message\": \"hello\"}' With a discriminator : curl https://localhost:2746/api/v1/events/argo/my-discriminator \\ -H \"Authorization: $ARGO_TOKEN \" \\ -d '{\"message\": \"hello\"}' The event endpoint will always return in under 10 seconds because the event will be queued and processed asynchronously. This means you will not be notified synchronously of failure. It will return a failure (503) if the event processing queue is full. Processing Order Events may not always be processed in the order they are received. Workflow Template triggered by the event \u00b6 Before the binding between an event and a workflow template, you must create the workflow template that you want to trigger. The following one takes as input the \"message\" parameter specified in the API call body, passed through the WorkflowEventBinding parameters section, and finally resolved here as the message of the whalesay image. apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : my-wf-tmple namespace : argo spec : templates : - name : main inputs : parameters : - name : message value : \"{{workflow.parameters.message}}\" container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] entrypoint : main Submitting A Workflow From A Workflow Template \u00b6 A workflow template will be submitted (i.e. a workflow will be created from it), and that workflow can be created using parameters from the event itself. The following example will be triggered by an event with \"message\" in the payload. That message will be used as an argument for the created workflow.
Note that the name of the meta-data header \"x-argo-e2e\" is lowercase in the selector to match. Incoming header names are converted to lowercase. apiVersion : argoproj.io/v1alpha1 kind : WorkflowEventBinding metadata : name : event-consumer spec : event : # metadata header name must be lowercase to match in selector selector : payload.message != \"\" && metadata[\"x-argo-e2e\"] == [\"true\"] && discriminator == \"my-discriminator\" submit : workflowTemplateRef : name : my-wf-tmple arguments : parameters : - name : message valueFrom : event : payload.message Please, notice that workflowTemplateRef refers to a template with the name my-wf-tmple , this template has to be created before the triggering of the event. After that you have to apply the above explained WorkflowEventBinding (in this example this is called event-template.yml ) to realize the binding between Workflow Template and event (you can use kubectl to do that): kubectl apply -f event-template.yml Finally you can trigger the creation of your first parametrized workflow template, by using the following call: Event: curl $ARGO_SERVER /api/v1/events/argo/my-discriminator \\ -H \"Authorization: $ARGO_TOKEN \" \\ -H \"X-Argo-E2E: true\" \\ -d '{\"message\": \"hello events\"}' Malformed Expressions If the expression is malformed, this is logged. It is not visible in logs or the UI. Customizing the Workflow Meta-Data \u00b6 You can customize the name of the submitted workflow as well as add annotations and labels. This is done by adding a metadata object to the submit object. Normally the name of the workflow created from an event is simply the name of the template with a time-stamp appended. This can be customized by setting the name in the metadata object. Annotations and labels are added in the same fashion. All the values for the name, annotations and labels are treated as expressions (see below for details). The metadata object is the same metadata type as on all Kubernetes resources and as such is parsed in the same manner. It is best to enclose the expression in single quotes to avoid any problems when submitting the event binding to Kubernetes. This is an example snippet of how to set the name, annotations and labels. This is based on the workflow binding from above, and the first event. submit : metadata : annotations : anAnnotation : 'event.payload.message' name : 'event.payload.message + \"-world\"' labels : someLabel : '\"literal string\"' This will result in the workflow being named \"hello-world\" instead of my-wf-tmple- . There will be an extra label with the key someLabel and a value of \"literal string\". There will also be an extra annotation with the key anAnnotation and a value of \"hello\" Be careful when setting the name. If the name expression evaluates to that of a currently existing workflow, the new workflow will fail to submit. The name, annotation and label expression must evaluate to a string and follow the normal Kubernetes naming requirements . Event Expression Syntax and the Event Expression Environment \u00b6 Event expressions are expressions that are evaluated over the event expression environment . Expression Syntax \u00b6 Because the endpoint accepts any JSON data, it is the user's responsibility to write a suitable expression to correctly filter the events they are interested in. Therefore, DO NOT assume the existence of any fields, and guard against them using a nil check. Learn more about expression syntax . Expression Environment \u00b6 The event environment contains: payload the event payload. 
metadata event meta-data, including HTTP headers. discriminator the discriminator from the URL. Payload \u00b6 This is the JSON payload of the event. Example: payload.repository.clone_url == \"http://gihub.com/argoproj/argo\" Meta-Data \u00b6 Meta-data is data about the event, this includes headers : Headers \u00b6 HTTP header names are lowercase and only include those that have x- as their prefix. Their values are lists, not single values. Wrong: metadata[\"X-Github-Event\"] == \"push\" Wrong: metadata[\"x-github-event\"] == \"push\" Wrong: metadata[\"X-Github-Event\"] == [\"push\"] Wrong: metadata[\"github-event\"] == [\"push\"] Wrong: metadata[\"authorization\"] == [\"push\"] Right: metadata[\"x-github-event\"] == [\"push\"] Example: metadata[\"x-argo\"] == [\"yes\"] Discriminator \u00b6 This is only for edge-cases where neither the payload, or meta-data provide enough information to discriminate. Typically, it should be empty and ignored. Example: discriminator == \"my-discriminator\" High-Availability \u00b6 Run Minimum 2 Replicas You MUST run a minimum of two Argo Server replicas if you do not want to lose events. If you are processing large numbers of events, you may need to scale up the Argo Server to handle them. By default, a single Argo Server can be processing 64 events before the endpoint will start returning 503 errors. Vertically you can: Increase the size of the event operation queue --event-operation-queue-size (good for temporary event bursts). Increase the number of workers --event-worker-count (good for sustained numbers of events). Horizontally you can: Run more Argo Servers (good for sustained numbers of events AND high-availability).","title":"Events"},{"location":"events/#events","text":"v2.11 and after","title":"Events"},{"location":"events/#overview","text":"To support external webhooks, we have this endpoint /api/v1/events/{namespace}/{discriminator} . Events sent to that can be any JSON data. These events can submit workflow templates or cluster workflow templates . You may also wish to read about webhooks .","title":"Overview"},{"location":"events/#authentication-and-security","text":"Clients wanting to send events to the endpoint need an access token . It is only possible to submit workflow templates your access token has access to: example role . Example (note the trailing slash): curl https://localhost:2746/api/v1/events/argo/ \\ -H \"Authorization: $ARGO_TOKEN \" \\ -d '{\"message\": \"hello\"}' With a discriminator : curl https://localhost:2746/api/v1/events/argo/my-discriminator \\ -H \"Authorization: $ARGO_TOKEN \" \\ -d '{\"message\": \"hello\"}' The event endpoint will always return in under 10 seconds because the event will be queued and processed asynchronously. This means you will not be notified synchronously of failure. It will return a failure (503) if the event processing queue is full. Processing Order Events may not always be processed in the order they are received.","title":"Authentication and Security"},{"location":"events/#workflow-template-triggered-by-the-event","text":"Before the binding between an event and a workflow template, you must create the workflow template that you want to trigger. The following one takes in input the \"message\" parameter specified into the API call body, passed through the WorkflowEventBinding parameters section, and finally resolved here as the message of the whalesay image. 
apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : my-wf-tmple namespace : argo spec : templates : - name : main inputs : parameters : - name : message value : \"{{workflow.parameters.message}}\" container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] entrypoint : main","title":"Workflow Template triggered by the event"},{"location":"events/#submitting-a-workflow-from-a-workflow-template","text":"A workflow template will be submitted (i.e. workflow created from it) and that can be created using parameters from the event itself. The following example will be triggered by an event with \"message\" in the payload. That message will be used as an argument for the created workflow. Note that the name of the meta-data header \"x-argo-e2e\" is lowercase in the selector to match. Incoming header names are converted to lowercase. apiVersion : argoproj.io/v1alpha1 kind : WorkflowEventBinding metadata : name : event-consumer spec : event : # metadata header name must be lowercase to match in selector selector : payload.message != \"\" && metadata[\"x-argo-e2e\"] == [\"true\"] && discriminator == \"my-discriminator\" submit : workflowTemplateRef : name : my-wf-tmple arguments : parameters : - name : message valueFrom : event : payload.message Please, notice that workflowTemplateRef refers to a template with the name my-wf-tmple , this template has to be created before the triggering of the event. After that you have to apply the above explained WorkflowEventBinding (in this example this is called event-template.yml ) to realize the binding between Workflow Template and event (you can use kubectl to do that): kubectl apply -f event-template.yml Finally you can trigger the creation of your first parametrized workflow template, by using the following call: Event: curl $ARGO_SERVER /api/v1/events/argo/my-discriminator \\ -H \"Authorization: $ARGO_TOKEN \" \\ -H \"X-Argo-E2E: true\" \\ -d '{\"message\": \"hello events\"}' Malformed Expressions If the expression is malformed, this is logged. It is not visible in logs or the UI.","title":"Submitting A Workflow From A Workflow Template"},{"location":"events/#customizing-the-workflow-meta-data","text":"You can customize the name of the submitted workflow as well as add annotations and labels. This is done by adding a metadata object to the submit object. Normally the name of the workflow created from an event is simply the name of the template with a time-stamp appended. This can be customized by setting the name in the metadata object. Annotations and labels are added in the same fashion. All the values for the name, annotations and labels are treated as expressions (see below for details). The metadata object is the same metadata type as on all Kubernetes resources and as such is parsed in the same manner. It is best to enclose the expression in single quotes to avoid any problems when submitting the event binding to Kubernetes. This is an example snippet of how to set the name, annotations and labels. This is based on the workflow binding from above, and the first event. submit : metadata : annotations : anAnnotation : 'event.payload.message' name : 'event.payload.message + \"-world\"' labels : someLabel : '\"literal string\"' This will result in the workflow being named \"hello-world\" instead of my-wf-tmple- . There will be an extra label with the key someLabel and a value of \"literal string\". 
There will also be an extra annotation with the key anAnnotation and a value of \"hello\" Be careful when setting the name. If the name expression evaluates to that of a currently existing workflow, the new workflow will fail to submit. The name, annotation and label expression must evaluate to a string and follow the normal Kubernetes naming requirements .","title":"Customizing the Workflow Meta-Data"},{"location":"events/#event-expression-syntax-and-the-event-expression-environment","text":"Event expressions are expressions that are evaluated over the event expression environment .","title":"Event Expression Syntax and the Event Expression Environment"},{"location":"events/#expression-syntax","text":"Because the endpoint accepts any JSON data, it is the user's responsibility to write a suitable expression to correctly filter the events they are interested in. Therefore, DO NOT assume the existence of any fields, and guard against them using a nil check. Learn more about expression syntax .","title":"Expression Syntax"},{"location":"events/#expression-environment","text":"The event environment contains: payload the event payload. metadata event meta-data, including HTTP headers. discriminator the discriminator from the URL.","title":"Expression Environment"},{"location":"events/#payload","text":"This is the JSON payload of the event. Example: payload.repository.clone_url == \"http://gihub.com/argoproj/argo\"","title":"Payload"},{"location":"events/#meta-data","text":"Meta-data is data about the event, this includes headers :","title":"Meta-Data"},{"location":"events/#headers","text":"HTTP header names are lowercase and only include those that have x- as their prefix. Their values are lists, not single values. Wrong: metadata[\"X-Github-Event\"] == \"push\" Wrong: metadata[\"x-github-event\"] == \"push\" Wrong: metadata[\"X-Github-Event\"] == [\"push\"] Wrong: metadata[\"github-event\"] == [\"push\"] Wrong: metadata[\"authorization\"] == [\"push\"] Right: metadata[\"x-github-event\"] == [\"push\"] Example: metadata[\"x-argo\"] == [\"yes\"]","title":"Headers"},{"location":"events/#discriminator","text":"This is only for edge-cases where neither the payload, or meta-data provide enough information to discriminate. Typically, it should be empty and ignored. Example: discriminator == \"my-discriminator\"","title":"Discriminator"},{"location":"events/#high-availability","text":"Run Minimum 2 Replicas You MUST run a minimum of two Argo Server replicas if you do not want to lose events. If you are processing large numbers of events, you may need to scale up the Argo Server to handle them. By default, a single Argo Server can be processing 64 events before the endpoint will start returning 503 errors. Vertically you can: Increase the size of the event operation queue --event-operation-queue-size (good for temporary event bursts). Increase the number of workers --event-worker-count (good for sustained numbers of events). Horizontally you can: Run more Argo Servers (good for sustained numbers of events AND high-availability).","title":"High-Availability"},{"location":"executor_plugins/","text":"Executor Plugins \u00b6 Since v3.3 Configuration \u00b6 Plugins are disabled by default. To enable them, start the controller with ARGO_EXECUTOR_PLUGINS=true , e.g. 
apiVersion : apps/v1 kind : Deployment metadata : name : workflow-controller spec : template : spec : containers : - name : workflow-controller env : - name : ARGO_EXECUTOR_PLUGINS value : \"true\" When using the Helm chart , add this to your values.yaml : controller : extraEnv : - name : ARGO_EXECUTOR_PLUGINS value : \"true\" Template Executor \u00b6 This is a plugin that runs custom \"plugin\" templates, e.g. for non-pod tasks such as Tekton builds, Spark jobs, sending Slack notifications. A Simple Python Plugin \u00b6 Let's make a Python plugin that prints \"hello\" each time the workflow is operated on. We need the following: Plugins enabled (see above). A HTTP server that will be run as a sidecar to the main container and will respond to RPC HTTP requests from the executor with this API contract . A plugin.yaml configuration file, that is turned into a config map so the controller can discover the plugin. A template executor plugin services HTTP POST requests on /api/v1/template.execute : curl http://localhost:4355/api/v1/template.execute -d \\ '{ \"workflow\": { \"metadata\": { \"name\": \"my-wf\" } }, \"template\": { \"name\": \"my-tmpl\", \"inputs\": {}, \"outputs\": {}, \"plugin\": { \"hello\": {} } } }' # ... HTTP/1.1 200 OK { \"node\" : { \"phase\" : \"Succeeded\" , \"message\" : \"Hello template!\" } } Tip: The port number can be anything, but must not conflict with other plugins. Don't use common ports such as 80, 443, 8080, 8081, 8443. If you plan to publish your plugin, choose a random port number under 10,000 and create a PR to add your plugin. If not, use a port number greater than 10,000. We'll need to create a script that starts a HTTP server. Save this as server.py : import json from http.server import BaseHTTPRequestHandler , HTTPServer with open ( \"/var/run/argo/token\" ) as f : token = f . read () . strip () class Plugin ( BaseHTTPRequestHandler ): def args ( self ): return json . loads ( self . rfile . read ( int ( self . headers . get ( 'Content-Length' )))) def reply ( self , reply ): self . send_response ( 200 ) self . end_headers () self . wfile . write ( json . dumps ( reply ) . encode ( \"UTF-8\" )) def forbidden ( self ): self . send_response ( 403 ) self . end_headers () def unsupported ( self ): self . send_response ( 404 ) self . end_headers () def do_POST ( self ): if self . headers . get ( \"Authorization\" ) != \"Bearer \" + token : self . forbidden () elif self . path == '/api/v1/template.execute' : args = self . args () if 'hello' in args [ 'template' ] . get ( 'plugin' , {}): self . reply ( { 'node' : { 'phase' : 'Succeeded' , 'message' : 'Hello template!' , 'outputs' : { 'parameters' : [{ 'name' : 'foo' , 'value' : 'bar' }]}}}) else : self . reply ({}) else : self . unsupported () if __name__ == '__main__' : httpd = HTTPServer (( '' , 4355 ), Plugin ) httpd . serve_forever () Tip : Plugins can be written in any language you can run as a container. Python is convenient because you can embed the script in the container. Some things to note here: You only need to implement the calls you need. Return 404 and it won't be called again. The path is the RPC method name. You should check that the Authorization header contains the same value as /var/run/argo/token . Return 403 if not The request body contains the template's input parameters. The response body may contain the node's result, including the phase (e.g. \"Succeeded\" or \"Failed\") and a message. If the response is {} , then the plugin is saying it cannot execute the plugin template, e.g. 
it is a Slack plugin, but the template is a Tekton job. If the status code is 404, then the plugin will not be called again. If you save the file as server.* , it will be copied to the sidecar container's args field. This is useful for building self-contained plugins in scripting languages like Python or Node.JS. Next, create a manifest named plugin.yaml : apiVersion : argoproj.io/v1alpha1 kind : ExecutorPlugin metadata : name : hello spec : sidecar : container : command : - python - -u # disables output buffering - -c image : python:alpine3.6 name : hello-executor-plugin ports : - containerPort : 4355 securityContext : runAsNonRoot : true runAsUser : 65534 # nobody resources : requests : memory : \"64Mi\" cpu : \"250m\" limits : memory : \"128Mi\" cpu : \"500m\" Build and install as follows: argo executor-plugin build . kubectl -n argo apply -f hello-executor-plugin-configmap.yaml Check your controller logs: level=info msg=\"Executor plugin added\" name=hello-controller-plugin Run this workflow. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello- spec : entrypoint : main templates : - name : main plugin : hello : { } You'll see the workflow complete successfully. Discovery \u00b6 When a workflow is run, plugins are loaded from: The workflow's namespace. The Argo installation namespace (typically argo ). If two plugins have the same name, only the one in the workflow's namespace is loaded. Secrets \u00b6 If you interact with a third-party system, you'll need access to secrets. Don't put them in plugin.yaml . Use a secret: spec : sidecar : container : env : - name : URL valueFrom : secretKeyRef : name : slack-executor-plugin key : URL Refer to the Kubernetes Secret documentation for secret best practices and security considerations. Resources, Security Context \u00b6 We made these mandatory, so no one can create a plugin that uses an unreasonable amount of memory, or run as root unless they deliberately do so: spec : sidecar : container : resources : requests : cpu : 100m memory : 32Mi limits : cpu : 200m memory : 64Mi securityContext : runAsNonRoot : true runAsUser : 1000 Failure \u00b6 A plugin may fail as follows: Connection/socket error - considered transient. Timeout - considered transient. 404 error - method is not supported by the plugin; as a result, the method will not be called again (in the same workflow). 503 error - considered transient. Other 4xx/5xx errors - considered fatal. Transient errors are retried, all other errors are considered fatal. Fatal errors will result in failed steps. Re-Queue \u00b6 It might be the case that the plugin can't finish straight away. E.g. it starts a long-running task. When that happens, you return \"Pending\" or \"Running\" and a re-queue time: { \"node\" : { \"phase\" : \"Running\" , \"message\" : \"Long-running task started\" }, \"requeue\" : \"2m\" } In this example, the task will be re-queued and template.execute will be called again in 2 minutes.
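For illustration, here is a small, self-contained sketch (not taken from the Argo docs) of a handler that follows this re-queue contract; the execute_template function name and the task_finished flag are assumptions made for the example, and the reply shapes mirror the JSON shown above:

```python
# Hypothetical helper: decide the reply for a template.execute request.
def execute_template(args, task_finished):
    """args is the parsed request body; task_finished is a caller-supplied check."""
    if 'hello' not in args['template'].get('plugin', {}):
        return {}  # empty reply: this plugin cannot execute the template
    if not task_finished:
        # "requeue" asks the controller to call template.execute again later.
        return {
            'node': {'phase': 'Running', 'message': 'Long-running task started'},
            'requeue': '2m',
        }
    return {'node': {'phase': 'Succeeded', 'message': 'Task finished'}}

# First call reports Running plus a re-queue interval, a later call reports Succeeded.
args = {'workflow': {'metadata': {'name': 'my-wf'}},
        'template': {'name': 'my-tmpl', 'plugin': {'hello': {}}}}
print(execute_template(args, task_finished=False))
print(execute_template(args, task_finished=True))
```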
Debugging \u00b6 You can find the plugin's log in the agent pod's sidecar, e.g.: kubectl -n argo logs ${ agentPodName } -c hello-executor-plugin Listing Plugins \u00b6 Because plugins are just config maps, you can list them using kubectl : kubectl get cm -l workflows.argoproj.io/configmap-type = ExecutorPlugin Examples and Community Contributed Plugins \u00b6 Plugin directory Publishing Your Plugin \u00b6 If you want to publish and share you plugin (we hope you do!), then submit a pull request to add it to the above directory.","title":"Executor Plugins"},{"location":"executor_plugins/#executor-plugins","text":"Since v3.3","title":"Executor Plugins"},{"location":"executor_plugins/#configuration","text":"Plugins are disabled by default. To enable them, start the controller with ARGO_EXECUTOR_PLUGINS=true , e.g. apiVersion : apps/v1 kind : Deployment metadata : name : workflow-controller spec : template : spec : containers : - name : workflow-controller env : - name : ARGO_EXECUTOR_PLUGINS value : \"true\" When using the Helm chart , add this to your values.yaml : controller : extraEnv : - name : ARGO_EXECUTOR_PLUGINS value : \"true\"","title":"Configuration"},{"location":"executor_plugins/#template-executor","text":"This is a plugin that runs custom \"plugin\" templates, e.g. for non-pod tasks such as Tekton builds, Spark jobs, sending Slack notifications.","title":"Template Executor"},{"location":"executor_plugins/#a-simple-python-plugin","text":"Let's make a Python plugin that prints \"hello\" each time the workflow is operated on. We need the following: Plugins enabled (see above). A HTTP server that will be run as a sidecar to the main container and will respond to RPC HTTP requests from the executor with this API contract . A plugin.yaml configuration file, that is turned into a config map so the controller can discover the plugin. A template executor plugin services HTTP POST requests on /api/v1/template.execute : curl http://localhost:4355/api/v1/template.execute -d \\ '{ \"workflow\": { \"metadata\": { \"name\": \"my-wf\" } }, \"template\": { \"name\": \"my-tmpl\", \"inputs\": {}, \"outputs\": {}, \"plugin\": { \"hello\": {} } } }' # ... HTTP/1.1 200 OK { \"node\" : { \"phase\" : \"Succeeded\" , \"message\" : \"Hello template!\" } } Tip: The port number can be anything, but must not conflict with other plugins. Don't use common ports such as 80, 443, 8080, 8081, 8443. If you plan to publish your plugin, choose a random port number under 10,000 and create a PR to add your plugin. If not, use a port number greater than 10,000. We'll need to create a script that starts a HTTP server. Save this as server.py : import json from http.server import BaseHTTPRequestHandler , HTTPServer with open ( \"/var/run/argo/token\" ) as f : token = f . read () . strip () class Plugin ( BaseHTTPRequestHandler ): def args ( self ): return json . loads ( self . rfile . read ( int ( self . headers . get ( 'Content-Length' )))) def reply ( self , reply ): self . send_response ( 200 ) self . end_headers () self . wfile . write ( json . dumps ( reply ) . encode ( \"UTF-8\" )) def forbidden ( self ): self . send_response ( 403 ) self . end_headers () def unsupported ( self ): self . send_response ( 404 ) self . end_headers () def do_POST ( self ): if self . headers . get ( \"Authorization\" ) != \"Bearer \" + token : self . forbidden () elif self . path == '/api/v1/template.execute' : args = self . args () if 'hello' in args [ 'template' ] . get ( 'plugin' , {}): self . 
reply ( { 'node' : { 'phase' : 'Succeeded' , 'message' : 'Hello template!' , 'outputs' : { 'parameters' : [{ 'name' : 'foo' , 'value' : 'bar' }]}}}) else : self . reply ({}) else : self . unsupported () if __name__ == '__main__' : httpd = HTTPServer (( '' , 4355 ), Plugin ) httpd . serve_forever () Tip : Plugins can be written in any language you can run as a container. Python is convenient because you can embed the script in the container. Some things to note here: You only need to implement the calls you need. Return 404 and it won't be called again. The path is the RPC method name. You should check that the Authorization header contains the same value as /var/run/argo/token . Return 403 if not The request body contains the template's input parameters. The response body may contain the node's result, including the phase (e.g. \"Succeeded\" or \"Failed\") and a message. If the response is {} , then the plugin is saying it cannot execute the plugin template, e.g. it is a Slack plugin, but the template is a Tekton job. If the status code is 404, then the plugin will not be called again. If you save the file as server.* , it will be copied to the sidecar container's args field. This is useful for building self-contained plugins in scripting languages like Python or Node.JS. Next, create a manifest named plugin.yaml : apiVersion : argoproj.io/v1alpha1 kind : ExecutorPlugin metadata : name : hello spec : sidecar : container : command : - python - -u # disables output buffering - -c image : python:alpine3.6 name : hello-executor-plugin ports : - containerPort : 4355 securityContext : runAsNonRoot : true runAsUser : 65534 # nobody resources : requests : memory : \"64Mi\" cpu : \"250m\" limits : memory : \"128Mi\" cpu : \"500m\" Build and install as follows: argo executor-plugin build . kubectl -n argo apply -f hello-executor-plugin-configmap.yaml Check your controller logs: level=info msg=\"Executor plugin added\" name=hello-controller-plugin Run this workflow. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello- spec : entrypoint : main templates : - name : main plugin : hello : { } You'll see the workflow complete successfully.","title":"A Simple Python Plugin"},{"location":"executor_plugins/#discovery","text":"When a workflow is run, plugins are loaded from: The workflow's namespace. The Argo installation namespace (typically argo ). If two plugins have the same name, only the one in the workflow's namespace is loaded.","title":"Discovery"},{"location":"executor_plugins/#secrets","text":"If you interact with a third-party system, you'll need access to secrets. Don't put them in plugin.yaml . Use a secret: spec : sidecar : container : env : - name : URL valueFrom : secretKeyRef : name : slack-executor-plugin key : URL Refer to the Kubernetes Secret documentation for secret best practices and security considerations.","title":"Secrets"},{"location":"executor_plugins/#resources-security-context","text":"We made these mandatory, so no one can create a plugin that uses an unreasonable amount of memory, or run as root unless they deliberately do so: spec : sidecar : container : resources : requests : cpu : 100m memory : 32Mi limits : cpu : 200m memory : 64Mi securityContext : runAsNonRoot : true runAsUser : 1000","title":"Resources, Security Context"},{"location":"executor_plugins/#failure","text":"A plugin may fail as follows: Connection/socket error - considered transient. Timeout - considered transient. 
404 error - method is not supported by the plugin, as a result the method will not be called again (in the same workflow). 503 error - considered transient. Other 4xx/5xx errors - considered fatal. Transient errors are retried, all other errors are considered fatal. Fatal errors will result in failed steps.","title":"Failure"},{"location":"executor_plugins/#re-queue","text":"It might be the case that the plugin can't finish straight away. E.g. it starts a long running task. When that happens, you return \"Pending\" or \"Running\" a and a re-queue time: { \"node\" : { \"phase\" : \"Running\" , \"message\" : \"Long-running task started\" }, \"requeue\" : \"2m\" } In this example, the task will be re-queued and template.execute will be called again in 2 minutes.","title":"Re-Queue"},{"location":"executor_plugins/#debugging","text":"You can find the plugin's log in the agent pod's sidecar, e.g.: kubectl -n argo logs ${ agentPodName } -c hello-executor-plugin","title":"Debugging"},{"location":"executor_plugins/#listing-plugins","text":"Because plugins are just config maps, you can list them using kubectl : kubectl get cm -l workflows.argoproj.io/configmap-type = ExecutorPlugin","title":"Listing Plugins"},{"location":"executor_plugins/#examples-and-community-contributed-plugins","text":"Plugin directory","title":"Examples and Community Contributed Plugins"},{"location":"executor_plugins/#publishing-your-plugin","text":"If you want to publish and share you plugin (we hope you do!), then submit a pull request to add it to the above directory.","title":"Publishing Your Plugin"},{"location":"executor_swagger/","text":"The API for an executor plugin. \u00b6 Informations \u00b6 Version \u00b6 0.0.1 Content negotiation \u00b6 URI Schemes \u00b6 http Consumes \u00b6 application/json Produces \u00b6 application/json All endpoints \u00b6 operations \u00b6 Method URI Name Summary POST /api/v1/template.execute execute template Paths \u00b6 execute template ( executeTemplate ) \u00b6 POST /api/v1/template.execute Parameters \u00b6 Name Source Type Go type Separator Required Default Description Body body ExecuteTemplateArgs models.ExecuteTemplateArgs \u2713 All responses \u00b6 Code Status Description Has headers Schema 200 OK schema Responses \u00b6 200 \u00b6 Status: OK Schema \u00b6 ExecuteTemplateReply Models \u00b6 AWSElasticBlockStoreVolumeSource \u00b6 An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine +optional partition int32 (formatted integer) int32 partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). +optional readOnly boolean bool readOnly value true will force the readOnly setting in VolumeMounts. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore +optional volumeID string string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Affinity \u00b6 Properties Name Type Go type Required Default Description Example nodeAffinity NodeAffinity NodeAffinity podAffinity PodAffinity PodAffinity podAntiAffinity PodAntiAffinity PodAntiAffinity Amount \u00b6 +kubebuilder:validation:Type=number interface{} AnyString \u00b6 It will unmarshall int64, int32, float64, float32, boolean, a plain string and represents it as string. It will marshall back to string - marshalling is not symmetric. Name Type Go type Default Description Example AnyString string string It will unmarshall int64, int32, float64, float32, boolean, a plain string and represents it as string. It will marshall back to string - marshalling is not symmetric. ArchiveStrategy \u00b6 ArchiveStrategy describes how to archive files/directory when saving artifacts Properties Name Type Go type Required Default Description Example none NoneStrategy NoneStrategy tar TarStrategy TarStrategy zip ZipStrategy ZipStrategy Arguments \u00b6 Arguments to a template Properties Name Type Go type Required Default Description Example artifacts Artifacts Artifacts parameters [] Parameter []*Parameter Parameters is the list of parameters to pass to the template or workflow +patchStrategy=merge +patchMergeKey=name Artifact \u00b6 Artifact indicates an artifact to place at a specified path Properties Name Type Go type Required Default Description Example archive ArchiveStrategy ArchiveStrategy archiveLogs boolean bool ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC artifactory ArtifactoryArtifact ArtifactoryArtifact azure AzureArtifact AzureArtifact deleted boolean bool Has this been deleted? from string string From allows an artifact to reference an artifact from a previous step fromExpression string string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCSArtifact git GitArtifact GitArtifact globalName string string GlobalName exports an output artifact to the global scope, making it available as '{{workflow.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFSArtifact http HTTPArtifact HTTPArtifact mode int32 (formatted integer) int32 mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string string name of the artifact. must be unique within a template's inputs/outputs. 
optional boolean bool Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSSArtifact path string string Path is the container path to the artifact raw RawArtifact RawArtifact recurseMode boolean bool If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3Artifact subPath string string SubPath allows an artifact to be sourced from a subpath within the specified source ArtifactGC \u00b6 ArtifactGC describes how to delete artifacts from completed Workflows - this is embedded into the WorkflowLevelArtifactGC, and also used for individual Artifacts to override that as needed Properties Name Type Go type Required Default Description Example podMetadata Metadata Metadata serviceAccountName string string ServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion strategy ArtifactGCStrategy ArtifactGCStrategy ArtifactGCStrategy \u00b6 Name Type Go type Default Description Example ArtifactGCStrategy string string ArtifactLocation \u00b6 It is used as single artifact in the context of inputs/outputs (e.g. outputs.artifacts.artname). It is also used to describe the location of multiple artifacts such as the archive location of a single workflow step, which the executor will use as a default location to store its files. Properties Name Type Go type Required Default Description Example archiveLogs boolean bool ArchiveLogs indicates if the container logs should be archived artifactory ArtifactoryArtifact ArtifactoryArtifact azure AzureArtifact AzureArtifact gcs GCSArtifact GCSArtifact git GitArtifact GitArtifact hdfs HDFSArtifact HDFSArtifact http HTTPArtifact HTTPArtifact oss OSSArtifact OSSArtifact raw RawArtifact RawArtifact s3 S3Artifact S3Artifact ArtifactPaths \u00b6 ArtifactPaths expands a step from a collection of artifacts Properties Name Type Go type Required Default Description Example archive ArchiveStrategy ArchiveStrategy archiveLogs boolean bool ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC artifactory ArtifactoryArtifact ArtifactoryArtifact azure AzureArtifact AzureArtifact deleted boolean bool Has this been deleted? from string string From allows an artifact to reference an artifact from a previous step fromExpression string string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCSArtifact git GitArtifact GitArtifact globalName string string GlobalName exports an output artifact to the global scope, making it available as '{{workflow.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFSArtifact http HTTPArtifact HTTPArtifact mode int32 (formatted integer) int32 mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string string name of the artifact. must be unique within a template's inputs/outputs. 
optional boolean bool Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSSArtifact path string string Path is the container path to the artifact raw RawArtifact RawArtifact recurseMode boolean bool If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3Artifact subPath string string SubPath allows an artifact to be sourced from a subpath within the specified source ArtifactoryArtifact \u00b6 ArtifactoryArtifact is the location of an artifactory artifact Properties Name Type Go type Required Default Description Example passwordSecret SecretKeySelector SecretKeySelector url string string URL of the artifact usernameSecret SecretKeySelector SecretKeySelector Artifacts \u00b6 [] Artifact AzureArtifact \u00b6 AzureArtifact is the location of a an Azure Storage artifact Properties Name Type Go type Required Default Description Example accountKeySecret SecretKeySelector SecretKeySelector blob string string Blob is the blob name (i.e., path) in the container where the artifact resides container string string Container is the container where resources will be stored endpoint string string Endpoint is the service url associated with an account. It is most likely \"https:// .blob.core.windows.net\" useSDKCreds boolean bool UseSDKCreds tells the driver to figure out credentials based on sdk defaults. AzureDataDiskCachingMode \u00b6 +enum Name Type Go type Default Description Example AzureDataDiskCachingMode string string +enum AzureDataDiskKind \u00b6 +enum Name Type Go type Default Description Example AzureDataDiskKind string string +enum AzureDiskVolumeSource \u00b6 Properties Name Type Go type Required Default Description Example cachingMode AzureDataDiskCachingMode AzureDataDiskCachingMode diskName string string diskName is the Name of the data disk in the blob storage diskURI string string diskURI is the URI of data disk in the blob storage fsType string string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. +optional kind AzureDataDiskKind AzureDataDiskKind readOnly boolean bool readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional AzureFileVolumeSource \u00b6 Properties Name Type Go type Required Default Description Example readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional secretName string string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string string shareName is the azure share Name Backoff \u00b6 Backoff is a backoff strategy to use within retryStrategy Properties Name Type Go type Required Default Description Example duration string string Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. 
\"2m\", \"1h\") factor IntOrString IntOrString maxDuration string string MaxDuration is the maximum amount of time allowed for a workflow in the backoff strategy BasicAuth \u00b6 BasicAuth describes the secret selectors required for basic authentication Properties Name Type Go type Required Default Description Example passwordSecret SecretKeySelector SecretKeySelector usernameSecret SecretKeySelector SecretKeySelector CSIVolumeSource \u00b6 Represents a source location of a volume to mount, managed by an external CSI driver Properties Name Type Go type Required Default Description Example driver string string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string string fsType to mount. Ex. \"ext4\", \"xfs\", \"ntfs\". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. +optional nodePublishSecretRef LocalObjectReference LocalObjectReference readOnly boolean bool readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). +optional volumeAttributes map of string map[string]string volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. +optional Cache \u00b6 Cache is the configuration for the type of cache to be used Properties Name Type Go type Required Default Description Example configMap ConfigMapKeySelector ConfigMapKeySelector Capabilities \u00b6 Properties Name Type Go type Required Default Description Example add [] Capability []Capability Added capabilities +optional drop [] Capability []Capability Removed capabilities +optional Capability \u00b6 Capability represent POSIX capabilities type Name Type Go type Default Description Example Capability string string Capability represent POSIX capabilities type CephFSVolumeSource \u00b6 Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example monitors []string []string monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / +optional readOnly boolean bool readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it +optional secretFile string string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it +optional secretRef LocalObjectReference LocalObjectReference user string string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it +optional CinderVolumeSource \u00b6 A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". 
Implicitly inferred to be \"ext4\" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md +optional readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md +optional secretRef LocalObjectReference LocalObjectReference volumeID string string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md ClientCertAuth \u00b6 ClientCertAuth holds necessary information for client authentication via certificates Properties Name Type Go type Required Default Description Example clientCertSecret SecretKeySelector SecretKeySelector clientKeySecret SecretKeySelector SecretKeySelector ConfigMapEnvSource \u00b6 The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Properties Name Type Go type Required Default Description Example name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool Specify whether the ConfigMap must be defined +optional ConfigMapKeySelector \u00b6 +structType=atomic Properties Name Type Go type Required Default Description Example key string string The key to select. name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool Specify whether the ConfigMap or its key must be defined +optional ConfigMapProjection \u00b6 The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. Properties Name Type Go type Required Default Description Example items [] KeyToPath []*KeyToPath items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool optional specify whether the ConfigMap or its keys must be defined +optional ConfigMapVolumeSource \u00b6 The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. 
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional items [] KeyToPath []*KeyToPath items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool optional specify whether the ConfigMap or its keys must be defined +optional Container \u00b6 Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. 
+optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional ContainerNode \u00b6 Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". 
Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional dependencies []string []string env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. 
If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional ContainerPort \u00b6 Properties Name Type Go type Required Default Description Example containerPort int32 (formatted integer) int32 Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string string What host IP to bind the external port to. +optional hostPort int32 (formatted integer) int32 Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. +optional name string string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. +optional protocol Protocol Protocol ContainerSetRetryStrategy \u00b6 Properties Name Type Go type Required Default Description Example duration string string Duration is the time between each retry, example values are \"300ms\", \"1s\" or \"5m\". Valid time units are \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\". retries IntOrString IntOrString ContainerSetTemplate \u00b6 Properties Name Type Go type Required Default Description Example containers [] ContainerNode []*ContainerNode retryStrategy ContainerSetRetryStrategy ContainerSetRetryStrategy volumeMounts [] VolumeMount []*VolumeMount ContinueOn \u00b6 ContinueOn specifies whether the workflow should continue when the pod errors, fails, or both.
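To make the ContainerNode, ContainerSetRetryStrategy and ContainerSetTemplate fields above more concrete, here is a minimal, hedged sketch of a containerSet template; the template, volume and image names are illustrative only:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: container-set-example-
spec:
  entrypoint: main
  templates:
    - name: main
      volumes:
        - name: workspace
          emptyDir: {}               # EmptyDirVolumeSource
      containerSet:
        retryStrategy:               # ContainerSetRetryStrategy
          retries: "2"
          duration: 30s
        volumeMounts:
          - name: workspace
            mountPath: /workspace
        containers:                  # []ContainerNode
          - name: prepare
            image: argoproj/argosay:v2
            command: [/argosay]
          - name: consume
            image: argoproj/argosay:v2
            command: [/argosay]
            dependencies: [prepare]  # consume starts only after prepare succeeds
```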
Properties Name Type Go type Required Default Description Example error boolean bool +optional failed boolean bool +optional Counter \u00b6 Counter is a Counter prometheus metric Properties Name Type Go type Required Default Description Example value string string Value is the value of the metric CreateS3BucketOptions \u00b6 CreateS3BucketOptions options used to determine the automatic bucket-creation process Properties Name Type Go type Required Default Description Example objectLocking boolean bool ObjectLocking Enable object locking DAGTask \u00b6 DAGTask represents a node in the graph during DAG execution Properties Name Type Go type Required Default Description Example arguments Arguments Arguments continueOn ContinueOn ContinueOn dependencies []string []string Dependencies are names of other targets which this depends on depends string string Depends are names of other targets which this depends on hooks LifecycleHooks LifecycleHooks inline Template Template name string string Name is the name of the target onExit string string OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template. DEPRECATED: Use Hooks[exit].Template instead. template string string Name of template to execute templateRef TemplateRef TemplateRef when string string When is an expression in which the task should conditionally execute withItems [] Item []Item WithItems expands a task into multiple parallel tasks from the items in the list withParam string string WithParam expands a task into multiple parallel tasks from the value in the parameter, which is expected to be a JSON list. withSequence Sequence Sequence DAGTemplate \u00b6 DAGTemplate is a template subtype for directed acyclic graph templates Properties Name Type Go type Required Default Description Example failFast boolean bool This flag is for DAG logic. The DAG logic has a built-in \"fail fast\" feature to stop scheduling new steps, as soon as it detects that one of the DAG nodes is failed. Then it waits until all DAG nodes are completed before failing the DAG itself. The FailFast flag default is true; if set to false, it will allow a DAG to run all branches of the DAG to completion (either success or failure), regardless of the failed outcomes of branches in the DAG. More info and example about this feature at https://github.com/argoproj/argo-workflows/issues/1442 target string string Target is one or more names of targets to execute in a DAG tasks [] DAGTask []*DAGTask Tasks are a list of DAG tasks +patchStrategy=merge +patchMergeKey=name Data \u00b6 Data is a data template Properties Name Type Go type Required Default Description Example source DataSource DataSource transformation Transformation Transformation DataSource \u00b6 DataSource sources external data into a data template Properties Name Type Go type Required Default Description Example artifactPaths ArtifactPaths ArtifactPaths DownwardAPIProjection \u00b6 Note that this is identical to a downwardAPI volume source without the default mode.
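The DAGTask and DAGTemplate fields above combine roughly as follows; a hedged sketch with hypothetical task and image names, showing depends, continueOn and withItems together:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-example-
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        failFast: false               # let all branches run to completion
        tasks:
          - name: A
            template: echo
            arguments:
              parameters: [{name: message, value: A}]
          - name: B
            depends: "A.Succeeded"    # enhanced depends expression
            continueOn:
              failed: true            # downstream tasks may still run if B fails
            template: echo
            arguments:
              parameters: [{name: message, value: B}]
          - name: C
            depends: "A && B"
            template: echo
            withItems: [x, y, z]      # fan out into three parallel tasks
            arguments:
              parameters: [{name: message, value: "{{item}}"}]
    - name: echo
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:3.18
        command: [echo, "{{inputs.parameters.message}}"]
```

Note that depends and the older dependencies list are alternatives; a task normally uses one or the other.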
Properties Name Type Go type Required Default Description Example items [] DownwardAPIVolumeFile []*DownwardAPIVolumeFile Items is a list of DownwardAPIVolume file +optional DownwardAPIVolumeFile \u00b6 DownwardAPIVolumeFile represents information to create the file containing the pod field Properties Name Type Go type Required Default Description Example fieldRef ObjectFieldSelector ObjectFieldSelector mode int32 (formatted integer) int32 Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional path string string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef ResourceFieldSelector ResourceFieldSelector DownwardAPIVolumeSource \u00b6 Downward API volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 Optional: mode bits to use on created files by default. Must be a Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional items [] DownwardAPIVolumeFile []*DownwardAPIVolumeFile Items is a list of downward API volume file +optional Duration \u00b6 Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json. interface{} EmptyDirVolumeSource \u00b6 Empty directory volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example medium StorageMedium StorageMedium sizeLimit Quantity Quantity EnvFromSource \u00b6 EnvFromSource represents the source of a set of ConfigMaps Properties Name Type Go type Required Default Description Example configMapRef ConfigMapEnvSource ConfigMapEnvSource prefix string string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. +optional secretRef SecretEnvSource SecretEnvSource EnvVar \u00b6 Properties Name Type Go type Required Default Description Example name string string Name of the environment variable. Must be a C_IDENTIFIER. value string string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to \"\". 
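Since EnvVar, EnvVarSource and EnvFromSource are the standard Kubernetes container fields, they appear unchanged inside an Argo container template. A hedged sketch (the Secret and ConfigMap names are hypothetical):

```yaml
- name: env-example
  container:
    image: alpine:3.18
    command: [sh, -c, 'echo "$GREETING from $POD_NAME"']
    env:
      - name: GREETING
        value: hello                      # plain EnvVar
      - name: POD_NAME
        valueFrom:
          fieldRef:                       # ObjectFieldSelector
            fieldPath: metadata.name
      - name: API_TOKEN
        valueFrom:
          secretKeyRef:                   # SecretKeySelector
            name: my-secret
            key: token
    envFrom:
      - prefix: CFG_
        configMapRef:                     # ConfigMapEnvSource
          name: my-config
          optional: true
```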
+optional valueFrom EnvVarSource EnvVarSource EnvVarSource \u00b6 Properties Name Type Go type Required Default Description Example configMapKeyRef ConfigMapKeySelector ConfigMapKeySelector fieldRef ObjectFieldSelector ObjectFieldSelector resourceFieldRef ResourceFieldSelector ResourceFieldSelector secretKeyRef SecretKeySelector SecretKeySelector EphemeralVolumeSource \u00b6 Properties Name Type Go type Required Default Description Example volumeClaimTemplate PersistentVolumeClaimTemplate PersistentVolumeClaimTemplate ExecAction \u00b6 Properties Name Type Go type Required Default Description Example command []string []string Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions (' ', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. +optional ExecuteTemplateArgs \u00b6 Properties Name Type Go type Required Default Description Example template Template Template \u2713 workflow Workflow Workflow \u2713 ExecuteTemplateReply \u00b6 Properties Name Type Go type Required Default Description Example node NodeResult NodeResult requeue Duration Duration ExecutorConfig \u00b6 Properties Name Type Go type Required Default Description Example serviceAccountName string string ServiceAccountName specifies the service account name of the executor container. FCVolumeSource \u00b6 Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine +optional lun int32 (formatted integer) int32 lun is Optional: FC target lun number +optional readOnly boolean bool readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional targetWWNs []string []string targetWWNs is Optional: FC target worldwide names (WWNs) +optional wwids []string []string wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. +optional FieldsV1 \u00b6 Each key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f: ', where is the name of a field in a struct, or key in a map 'v: ', where is the exact json formatted value of a list item 'i: ', where is position of a item in a list 'k: ', where is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set. The exact format is defined in sigs.k8s.io/structured-merge-diff +protobuf.options.(gogoproto.goproto_stringer)=false interface{} FlexVolumeSource \u00b6 FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Properties Name Type Go type Required Default Description Example driver string string driver is the name of the driver to use for this volume. 
fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". The default filesystem depends on FlexVolume script. +optional options map of string map[string]string options is Optional: this field holds extra command options if any. +optional readOnly boolean bool readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional secretRef LocalObjectReference LocalObjectReference FlockerVolumeSource \u00b6 One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example datasetName string string datasetName is Name of the dataset stored as metadata -> name on the dataset for Flocker should be considered as deprecated +optional datasetUUID string string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset +optional GCEPersistentDiskVolumeSource \u00b6 A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine +optional partition int32 (formatted integer) int32 partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk +optional pdName string string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean bool readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk +optional GCSArtifact \u00b6 GCSArtifact is the location of a GCS artifact Properties Name Type Go type Required Default Description Example bucket string string Bucket is the name of the bucket key string string Key is the path in the bucket where the artifact resides serviceAccountKeySecret SecretKeySelector SecretKeySelector GRPCAction \u00b6 Properties Name Type Go type Required Default Description Example port int32 (formatted integer) int32 Port number of the gRPC service. Number must be in the range 1 to 65535. service string string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC. 
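As an illustration, a GCSArtifact is typically consumed as a template input artifact; a hedged sketch with a hypothetical bucket and Secret:

```yaml
- name: gcs-input-example
  inputs:
    artifacts:
      - name: dataset
        path: /tmp/data.csv
        gcs:                                # GCSArtifact
          bucket: my-bucket
          key: datasets/data.csv
          serviceAccountKeySecret:          # SecretKeySelector
            name: my-gcs-credentials
            key: serviceAccountKey
  container:
    image: alpine:3.18
    command: [sh, -c, 'wc -l /tmp/data.csv']
```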
+optional +default=\"\" | | Gauge \u00b6 Gauge is a Gauge prometheus metric Properties Name Type Go type Required Default Description Example operation GaugeOperation GaugeOperation realtime boolean bool Realtime emits this metric in real time if applicable value string string Value is the value to be used in the operation with the metric's current value. If no operation is set, value is the value of the metric GaugeOperation \u00b6 Name Type Go type Default Description Example GaugeOperation string string GitArtifact \u00b6 GitArtifact is the location of an git artifact Properties Name Type Go type Required Default Description Example branch string string Branch is the branch to fetch when SingleBranch is enabled depth uint64 (formatted integer) uint64 Depth specifies clones/fetches should be shallow and include the given number of commits from the branch tip disableSubmodules boolean bool DisableSubmodules disables submodules during git clone fetch []string []string Fetch specifies a number of refs that should be fetched before checkout insecureIgnoreHostKey boolean bool InsecureIgnoreHostKey disables SSH strict host key checking during git clone passwordSecret SecretKeySelector SecretKeySelector repo string string Repo is the git repository revision string string Revision is the git commit, tag, branch to checkout singleBranch boolean bool SingleBranch enables single branch clone, using the branch parameter sshPrivateKeySecret SecretKeySelector SecretKeySelector usernameSecret SecretKeySelector SecretKeySelector GitRepoVolumeSource \u00b6 DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Properties Name Type Go type Required Default Description Example directory string string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. +optional repository string string repository is the URL revision string string revision is the commit hash for the specified revision. +optional GlusterfsVolumeSource \u00b6 Glusterfs volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example endpoints string string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean bool readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod +optional HDFSArtifact \u00b6 HDFSArtifact is the location of an HDFS artifact Properties Name Type Go type Required Default Description Example addresses []string []string Addresses is accessible addresses of HDFS name nodes force boolean bool Force copies a file forcibly even if it exists hdfsUser string string HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used. 
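A GitArtifact is likewise declared as an input artifact; a hedged sketch of a shallow, single-branch clone (the repository and image are illustrative):

```yaml
- name: git-input-example
  inputs:
    artifacts:
      - name: source
        path: /src
        git:                                # GitArtifact
          repo: https://github.com/argoproj/argo-workflows.git
          branch: main
          singleBranch: true
          depth: 1                          # shallow clone of one commit
  container:
    image: alpine/git
    command: [sh, -c, 'git -C /src log -1 --oneline']
```

For private repositories, usernameSecret/passwordSecret or sshPrivateKeySecret reference Kubernetes Secrets in the same way as the other secret selectors above.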
krbCCacheSecret SecretKeySelector SecretKeySelector krbConfigConfigMap ConfigMapKeySelector ConfigMapKeySelector krbKeytabSecret SecretKeySelector SecretKeySelector krbRealm string string KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used. krbServicePrincipalName string string KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used. krbUsername string string KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used. path string string Path is a file path in HDFS HTTP \u00b6 Properties Name Type Go type Required Default Description Example body string string Body is content of the HTTP Request bodyFrom HTTPBodySource HTTPBodySource headers HTTPHeaders HTTPHeaders insecureSkipVerify boolean bool InsecureSkipVerify is a bool when if set to true will skip TLS verification for the HTTP client method string string Method is HTTP methods for HTTP Request successCondition string string SuccessCondition is an expression if evaluated to true is considered successful timeoutSeconds int64 (formatted integer) int64 TimeoutSeconds is request timeout for HTTP Request. Default is 30 seconds url string string URL of the HTTP Request HTTPArtifact \u00b6 HTTPArtifact allows a file served on HTTP to be placed as an input artifact in a container Properties Name Type Go type Required Default Description Example auth HTTPAuth HTTPAuth headers [] Header []*Header Headers are an optional list of headers to send with HTTP requests for artifacts url string string URL of the artifact HTTPAuth \u00b6 Properties Name Type Go type Required Default Description Example basicAuth BasicAuth BasicAuth clientCert ClientCertAuth ClientCertAuth oauth2 OAuth2Auth OAuth2Auth HTTPBodySource \u00b6 Properties Name Type Go type Required Default Description Example bytes []uint8 (formatted integer) []uint8 HTTPGetAction \u00b6 Properties Name Type Go type Required Default Description Example host string string Host name to connect to, defaults to the pod IP. You probably want to set \"Host\" in httpHeaders instead. +optional httpHeaders [] HTTPHeader []*HTTPHeader Custom headers to set in the request. HTTP allows repeated headers. +optional path string string Path to access on the HTTP server. +optional port IntOrString IntOrString scheme URIScheme URIScheme HTTPHeader \u00b6 Properties Name Type Go type Required Default Description Example name string string value string string valueFrom HTTPHeaderSource HTTPHeaderSource HTTPHeaderSource \u00b6 Properties Name Type Go type Required Default Description Example secretKeyRef SecretKeySelector SecretKeySelector HTTPHeaders \u00b6 [] HTTPHeader Header \u00b6 Header indicate a key-value request header to be used when fetching artifacts over HTTP Properties Name Type Go type Required Default Description Example name string string Name is the header name value string string Value is the literal value to use for the header Histogram \u00b6 Histogram is a Histogram prometheus metric Properties Name Type Go type Required Default Description Example buckets [] Amount []Amount Buckets is a list of bucket divisors for the histogram value string string Value is the value of the metric HostAlias \u00b6 HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Properties Name Type Go type Required Default Description Example hostnames []string []string Hostnames for the above IP address. 
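The HTTP, HTTPHeader and HTTPHeaderSource fields are what the http template type uses; a hedged sketch against a hypothetical endpoint and Secret:

```yaml
- name: http-example
  http:
    url: https://example.com/api/v1/ping
    method: GET
    timeoutSeconds: 20
    headers:                               # HTTPHeaders
      - name: Accept
        value: application/json
      - name: Authorization
        valueFrom:
          secretKeyRef:                    # HTTPHeaderSource
            name: my-api-token
            key: bearer
    successCondition: "response.statusCode == 200"
```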
ip string string IP address of the host file entry. HostPathType \u00b6 +enum Name Type Go type Default Description Example HostPathType string string +enum HostPathVolumeSource \u00b6 Host path volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example path string string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type HostPathType HostPathType ISCSIVolumeSource \u00b6 ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example chapAuthDiscovery boolean bool chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication +optional chapAuthSession boolean bool chapAuthSession defines whether support iSCSI Session CHAP authentication +optional fsType string string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine +optional initiatorName string string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface : will be created for the connection. +optional iqn string string iqn is the target iSCSI Qualified Name. iscsiInterface string string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). +optional lun int32 (formatted integer) int32 lun represents iSCSI Target Lun number. portals []string []string portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). +optional readOnly boolean bool readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. +optional secretRef LocalObjectReference LocalObjectReference targetPortal string string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). Inputs \u00b6 Inputs are the mechanism for passing parameters, artifacts, volumes from one template to another Properties Name Type Go type Required Default Description Example artifacts Artifacts Artifacts parameters [] Parameter []*Parameter Parameters are a list of parameters passed as inputs +patchStrategy=merge +patchMergeKey=name IntOrString \u00b6 +protobuf=true +protobuf.options.(gogoproto.goproto_stringer)=false +k8s:openapi-gen=true Properties Name Type Go type Required Default Description Example IntVal int32 (formatted integer) int32 StrVal string string Type Type Type Item \u00b6 +protobuf.options.(gogoproto.goproto_stringer)=false +kubebuilder:validation:Type=object interface{} KeyToPath \u00b6 Properties Name Type Go type Required Default Description Example key string string key is the key to project. mode int32 (formatted integer) int32 mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. 
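KeyToPath entries are what the items lists of the ConfigMap and Secret volume sources above refer to; a hedged sketch mounting a single key from a hypothetical ConfigMap:

```yaml
- name: configmap-volume-example
  volumes:
    - name: config
      configMap:                      # ConfigMapVolumeSource
        name: app-config
        defaultMode: 0444             # octal in YAML; use the decimal 292 in JSON
        items:
          - key: settings.yaml        # KeyToPath
            path: settings.yaml
            mode: 0400
  container:
    image: alpine:3.18
    command: [cat, /etc/app/settings.yaml]
    volumeMounts:
      - name: config
        mountPath: /etc/app
```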
If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional path string string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. LabelSelector \u00b6 A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. +structType=atomic Properties Name Type Go type Required Default Description Example matchExpressions [] LabelSelectorRequirement []*LabelSelectorRequirement matchExpressions is a list of label selector requirements. The requirements are ANDed. +optional matchLabels map of string map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values array contains only \"value\". The requirements are ANDed. +optional LabelSelectorOperator \u00b6 Name Type Go type Default Description Example LabelSelectorOperator string string LabelSelectorRequirement \u00b6 A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Properties Name Type Go type Required Default Description Example key string string key is the label key that the selector applies to. +patchMergeKey=key +patchStrategy=merge operator LabelSelectorOperator LabelSelectorOperator values []string []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +optional Lifecycle \u00b6 Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Properties Name Type Go type Required Default Description Example postStart LifecycleHandler LifecycleHandler preStop LifecycleHandler LifecycleHandler LifecycleHandler \u00b6 LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Properties Name Type Go type Required Default Description Example exec ExecAction ExecAction httpGet HTTPGetAction HTTPGetAction tcpSocket TCPSocketAction TCPSocketAction LifecycleHook \u00b6 Properties Name Type Go type Required Default Description Example arguments Arguments Arguments expression string string Expression is a condition expression for when a node will be retried. If it evaluates to false, the node will not be retried and the retry strategy will be ignored template string string Template is the name of the template to execute by the hook templateRef TemplateRef TemplateRef LifecycleHooks \u00b6 LifecycleHooks LocalObjectReference \u00b6 LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. +structType=atomic Properties Name Type Go type Required Default Description Example name string string Name of the referent. 
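LifecycleHook and LifecycleHooks are Argo's workflow and step hooks, distinct from the Kubernetes container Lifecycle above; a hedged sketch of workflow-level hooks with a hypothetical notify template:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hooks-example-
spec:
  entrypoint: main
  hooks:
    exit:                                      # LifecycleHook, replaces onExit
      template: notify
    running:
      expression: 'workflow.status == "Running"'
      template: notify
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [sh, -c, 'sleep 5']
    - name: notify
      container:
        image: alpine:3.18
        command: [echo, "hook fired"]
```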
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional ManagedFieldsEntry \u00b6 ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to. Properties Name Type Go type Required Default Description Example apiVersion string string APIVersion defines the version of this resource that this field set applies to. The format is \"group/version\" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted. fieldsType string string FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: \"FieldsV1\" fieldsV1 FieldsV1 FieldsV1 manager string string Manager is an identifier of the workflow managing these fields. operation ManagedFieldsOperationType ManagedFieldsOperationType subresource string string Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource. time Time Time ManagedFieldsOperationType \u00b6 Name Type Go type Default Description Example ManagedFieldsOperationType string string ManifestFrom \u00b6 Properties Name Type Go type Required Default Description Example artifact Artifact Artifact Memoize \u00b6 Memoization enables caching for the Outputs of the template Properties Name Type Go type Required Default Description Example cache Cache Cache key string string Key is the key to use as the caching key maxAge string string MaxAge is the maximum age (e.g. \"180s\", \"24h\") of an entry that is still considered valid. If an entry is older than the MaxAge, it will be ignored. Metadata \u00b6 Pod metadata Properties Name Type Go type Required Default Description Example annotations map of string map[string]string labels map of string map[string]string MetricLabel \u00b6 MetricLabel is a single label for a prometheus metric Properties Name Type Go type Required Default Description Example key string string value string string Metrics \u00b6 Metrics are a list of metrics emitted from a Workflow/Template Properties Name Type Go type Required Default Description Example prometheus [] Prometheus []*Prometheus Prometheus is a list of prometheus metrics to be emitted MountPropagationMode \u00b6 +enum Name Type Go type Default Description Example MountPropagationMode string string +enum Mutex \u00b6 Mutex holds Mutex configuration Properties Name Type Go type Required Default Description Example name string string name of the mutex namespace string string \"[namespace of workflow]\" NFSVolumeSource \u00b6 NFS volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example path string string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean bool readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false.
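Memoize and Cache work together on a template; a hedged sketch that caches results in a hypothetical ConfigMap keyed by the input parameter:

```yaml
- name: memoized-example
  inputs:
    parameters:
      - name: message
  memoize:
    key: "{{inputs.parameters.message}}"   # caching key
    maxAge: 24h                            # entries older than this are ignored
    cache:
      configMap:                           # ConfigMapKeySelector
        name: memoize-cache
  container:
    image: alpine:3.18
    command: [echo, "{{inputs.parameters.message}}"]
```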
More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs +optional server string string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs NodeAffinity \u00b6 Properties Name Type Go type Required Default Description Example preferredDuringSchedulingIgnoredDuringExecution [] PreferredSchedulingTerm []*PreferredSchedulingTerm The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. +optional requiredDuringSchedulingIgnoredDuringExecution NodeSelector NodeSelector NodePhase \u00b6 Name Type Go type Default Description Example NodePhase string string NodeResult \u00b6 Properties Name Type Go type Required Default Description Example message string string outputs Outputs Outputs phase NodePhase NodePhase progress Progress Progress NodeSelector \u00b6 A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. +structType=atomic Properties Name Type Go type Required Default Description Example nodeSelectorTerms [] NodeSelectorTerm []*NodeSelectorTerm Required. A list of node selector terms. The terms are ORed. NodeSelectorOperator \u00b6 A node selector operator is the set of operators that can be used in a node selector requirement. +enum Name Type Go type Default Description Example NodeSelectorOperator string string A node selector operator is the set of operators that can be used in a node selector requirement. +enum NodeSelectorRequirement \u00b6 A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Properties Name Type Go type Required Default Description Example key string string The label key that the selector applies to. operator NodeSelectorOperator NodeSelectorOperator values []string []string An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. +optional NodeSelectorTerm \u00b6 A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. +structType=atomic Properties Name Type Go type Required Default Description Example matchExpressions [] NodeSelectorRequirement []*NodeSelectorRequirement A list of node selector requirements by node's labels. +optional matchFields [] NodeSelectorRequirement []*NodeSelectorRequirement A list of node selector requirements by node's fields. +optional NoneStrategy \u00b6 NoneStrategy indicates to skip tar process and upload the files or directory tree as independent files. 
Note that if the artifact is a directory, the artifact driver must support the ability to save/load the directory appropriately. interface{} OAuth2Auth \u00b6 OAuth2Auth holds all information for client authentication via OAuth2 tokens Properties Name Type Go type Required Default Description Example clientIDSecret SecretKeySelector SecretKeySelector clientSecretSecret SecretKeySelector SecretKeySelector endpointParams [] OAuth2EndpointParam []*OAuth2EndpointParam scopes []string []string tokenURLSecret SecretKeySelector SecretKeySelector OAuth2EndpointParam \u00b6 EndpointParam is for requesting optional fields that should be sent in the oauth request Properties Name Type Go type Required Default Description Example key string string Name is the header name value string string Value is the literal value to use for the header OSSArtifact \u00b6 OSSArtifact is the location of an Alibaba Cloud OSS artifact Properties Name Type Go type Required Default Description Example accessKeySecret SecretKeySelector SecretKeySelector bucket string string Bucket is the name of the bucket createBucketIfNotPresent boolean bool CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist endpoint string string Endpoint is the hostname of the bucket endpoint key string string Key is the path in the bucket where the artifact resides lifecycleRule OSSLifecycleRule OSSLifecycleRule secretKeySecret SecretKeySelector SecretKeySelector securityToken string string SecurityToken is the user's temporary security token. For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm useSDKCreds boolean bool UseSDKCreds tells the driver to figure out credentials based on sdk defaults. OSSLifecycleRule \u00b6 OSSLifecycleRule specifies how to manage bucket's lifecycle Properties Name Type Go type Required Default Description Example markDeletionAfterDays int32 (formatted integer) int32 MarkDeletionAfterDays is the number of days before we delete objects in the bucket markInfrequentAccessAfterDays int32 (formatted integer) int32 MarkInfrequentAccessAfterDays is the number of days before we convert the objects in the bucket to Infrequent Access (IA) storage type ObjectFieldSelector \u00b6 +structType=atomic Properties Name Type Go type Required Default Description Example apiVersion string string Version of the schema the FieldPath is written in terms of, defaults to \"v1\". +optional fieldPath string string Path of the field to select in the specified API version. ObjectMeta \u00b6 Properties Name Type Go type Required Default Description Example name string string namespace string string uid string string Outputs \u00b6 Outputs hold parameters, artifacts, and results from a step Properties Name Type Go type Required Default Description Example artifacts Artifacts Artifacts exitCode string string ExitCode holds the exit code of a script template parameters [] Parameter []*Parameter Parameters holds the list of output parameters produced by a step +patchStrategy=merge +patchMergeKey=name result string string Result holds the result (stdout) of a script template OwnerReference \u00b6 OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. +structType=atomic Properties Name Type Go type Required Default Description Example apiVersion string string API version of the referent. 
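Outputs and OSSArtifact can be combined on a producing template; a hedged sketch with a hypothetical Alibaba Cloud OSS bucket and credentials Secret:

```yaml
- name: outputs-example
  container:
    image: alpine:3.18
    command: [sh, -c, 'echo 42 > /tmp/result.txt']
  outputs:
    parameters:
      - name: result
        globalName: workflow-result        # exposed as workflow.outputs.parameters
        valueFrom:
          path: /tmp/result.txt
    artifacts:
      - name: result-file
        path: /tmp/result.txt
        oss:                               # OSSArtifact
          endpoint: http://oss-cn-hangzhou.aliyuncs.com
          bucket: my-bucket
          key: results/result.txt
          accessKeySecret:
            name: my-oss-credentials
            key: accessKey
          secretKeySecret:
            name: my-oss-credentials
            key: secretKey
```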
blockOwnerDeletion boolean bool If true, AND if the owner has the \"foregroundDeletion\" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs \"delete\" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. +optional controller boolean bool If true, this reference points to the managing controller. +optional kind string string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string string Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names uid UID UID ParallelSteps \u00b6 +kubebuilder:validation:Type=array interface{} Parameter \u00b6 Parameter indicate a passed string parameter to a service template with an optional default value Properties Name Type Go type Required Default Description Example default AnyString AnyString description AnyString AnyString enum [] AnyString []AnyString Enum holds a list of string values to choose from, for the actual value of the parameter globalName string string GlobalName exports an output parameter to the global scope, making it available as '{{workflow.outputs.parameters.XXXX}} and in workflow.status.outputs.parameters name string string Name is the parameter name value AnyString AnyString valueFrom ValueFrom ValueFrom PersistentVolumeAccessMode \u00b6 +enum Name Type Go type Default Description Example PersistentVolumeAccessMode string string +enum PersistentVolumeClaimSpec \u00b6 PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Properties Name Type Go type Required Default Description Example accessModes [] PersistentVolumeAccessMode []PersistentVolumeAccessMode accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 +optional dataSource TypedLocalObjectReference TypedLocalObjectReference dataSourceRef TypedLocalObjectReference TypedLocalObjectReference resources ResourceRequirements ResourceRequirements selector LabelSelector LabelSelector storageClassName string string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 +optional volumeMode PersistentVolumeMode PersistentVolumeMode volumeName string string volumeName is the binding reference to the PersistentVolume backing this claim. +optional PersistentVolumeClaimTemplate \u00b6 PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. Properties Name Type Go type Required Default Description Example annotations map of string map[string]string Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations +optional clusterName string string Deprecated: ClusterName is a legacy field that was always cleared by the system and never used; it will be removed completely in 1.25. 
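Parameter fields such as value, enum and description show up on workflow arguments and template inputs; a hedged sketch with hypothetical values:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: parameter-example-
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: environment
        value: staging
        enum: [staging, production]        # choices offered at submit time
        description: target environment
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [echo, "deploying to {{workflow.parameters.environment}}"]
```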
The name in the go struct is changed to help clients detect accidental use. +optional creationTimestamp Time Time deletionGracePeriodSeconds int64 (formatted integer) int64 Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. +optional deletionTimestamp Time Time finalizers []string []string Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. +optional +patchStrategy=merge generateName string string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will return a 409. Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency +optional generation int64 (formatted integer) int64 A sequence number representing a specific generation of the desired state. Populated by the system. Read-only. +optional labels map of string map[string]string Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels +optional managedFields [] ManagedFieldsEntry []*ManagedFieldsEntry ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like \"ci-cd\". The set of fields is always in the version that the workflow used when modifying the object. +optional name string string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated.
More info: http://kubernetes.io/docs/user-guide/identifiers#names +optional | | | namespace | string| string | | | Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces +optional | | | ownerReferences | [] OwnerReference | []*OwnerReference | | | List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. +optional +patchMergeKey=uid +patchStrategy=merge | | | resourceVersion | string| string | | | An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency +optional | | | selfLink | string| string | | | Deprecated: selfLink is a legacy read-only field that is no longer populated by the system. +optional | | | spec | PersistentVolumeClaimSpec | PersistentVolumeClaimSpec | | | | | | uid | UID | UID | | | | | PersistentVolumeClaimVolumeSource \u00b6 This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). Properties Name Type Go type Required Default Description Example claimName string string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean bool readOnly Will force the ReadOnly setting in VolumeMounts. Default false. +optional PersistentVolumeMode \u00b6 +enum Name Type Go type Default Description Example PersistentVolumeMode string string +enum PhotonPersistentDiskVolumeSource \u00b6 Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. pdID string string pdID is the ID that identifies Photon Controller persistent disk Plugin \u00b6 Plugin is an Object with exactly one key interface{} PodAffinity \u00b6 Properties Name Type Go type Required Default Description Example preferredDuringSchedulingIgnoredDuringExecution [] WeightedPodAffinityTerm []*WeightedPodAffinityTerm The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. 
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. +optional requiredDuringSchedulingIgnoredDuringExecution [] PodAffinityTerm []*PodAffinityTerm If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +optional PodAffinityTerm \u00b6 Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running Properties Name Type Go type Required Default Description Example labelSelector LabelSelector LabelSelector namespaceSelector LabelSelector LabelSelector namespaces []string []string namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means \"this pod's namespace\". +optional topologyKey string string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. PodAntiAffinity \u00b6 Properties Name Type Go type Required Default Description Example preferredDuringSchedulingIgnoredDuringExecution [] WeightedPodAffinityTerm []*WeightedPodAffinityTerm The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. +optional requiredDuringSchedulingIgnoredDuringExecution [] PodAffinityTerm []*PodAffinityTerm If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +optional PodFSGroupChangePolicy \u00b6 PodFSGroupChangePolicy holds policies that will be used for applying fsGroup to a volume when volume is mounted. +enum Name Type Go type Default Description Example PodFSGroupChangePolicy string string PodFSGroupChangePolicy holds policies that will be used for applying fsGroup to a volume when volume is mounted. +enum PodSecurityContext \u00b6 Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Properties Name Type Go type Required Default Description Example fsGroup int64 (formatted integer) int64 A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: The owning GID will be the FSGroup The setgid bit is set (new files created in the volume will be owned by FSGroup) The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. +optional | | | fsGroupChangePolicy | PodFSGroupChangePolicy | PodFSGroupChangePolicy | | | | | | runAsGroup | int64 (formatted integer)| int64 | | | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. +optional | | | runAsNonRoot | boolean| bool | | | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. +optional | | | runAsUser | int64 (formatted integer)| int64 | | | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. +optional | | | seLinuxOptions | SELinuxOptions | SELinuxOptions | | | | | | seccompProfile | SeccompProfile | SeccompProfile | | | | | | supplementalGroups | []int64 (formatted integer)| []int64 | | | A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows. +optional | | | sysctls | [] Sysctl | []*Sysctl | | | Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. 
+optional | | | windowsOptions | WindowsSecurityContextOptions | WindowsSecurityContextOptions | | | | | PortworxVolumeSource \u00b6 Properties Name Type Go type Required Default Description Example fsType string string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\". Implicitly inferred to be \"ext4\" if unspecified. readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional volumeID string string volumeID uniquely identifies a Portworx volume PreferredSchedulingTerm \u00b6 An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Properties Name Type Go type Required Default Description Example preference NodeSelectorTerm NodeSelectorTerm weight int32 (formatted integer) int32 Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. Probe \u00b6 Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Properties Name Type Go type Required Default Description Example exec ExecAction ExecAction failureThreshold int32 (formatted integer) int32 Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. +optional grpc GRPCAction GRPCAction httpGet HTTPGetAction HTTPGetAction initialDelaySeconds int32 (formatted integer) int32 Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes +optional periodSeconds int32 (formatted integer) int32 How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. +optional successThreshold int32 (formatted integer) int32 Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. +optional tcpSocket TCPSocketAction TCPSocketAction terminationGracePeriodSeconds int64 (formatted integer) int64 Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. +optional timeoutSeconds int32 (formatted integer) int32 Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes +optional ProcMountType \u00b6 +enum Name Type Go type Default Description Example ProcMountType string string +enum Progress \u00b6 Name Type Go type Default Description Example Progress string string ProjectedVolumeSource \u00b6 Represents a projected volume source Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional sources [] VolumeProjection []*VolumeProjection sources is the list of volume projections +optional Prometheus \u00b6 Prometheus is a prometheus metric to be emitted Properties Name Type Go type Required Default Description Example counter Counter Counter gauge Gauge Gauge help string string Help is a string that describes the metric histogram Histogram Histogram labels [] MetricLabel []*MetricLabel Labels is a list of metric labels name string string Name is the name of the metric when string string When is a conditional statement that decides when to emit the metric Protocol \u00b6 +enum Name Type Go type Default Description Example Protocol string string +enum PullPolicy \u00b6 PullPolicy describes a policy for if/when to pull a container image +enum Name Type Go type Default Description Example PullPolicy string string PullPolicy describes a policy for if/when to pull a container image +enum Quantity \u00b6 The serialization format is: <quantity> ::= <signedNumber><suffix> (Note that <suffix> may be empty, from the \"\" case in <decimalSI>.) <digit> ::= 0 | 1 | ... | 9 <digits> ::= <digit> | <digit><digits> <number> ::= <digits> | <digits>.<digits> | <digits>. | .<digits> <sign> ::= \"+\" | \"-\" <signedNumber> ::= <number> | <sign><number> <suffix> ::= <binarySI> | <decimalExponent> | <decimalSI> <binarySI> ::= Ki | Mi | Gi | Ti | Pi | Ei (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html) <decimalSI> ::= m | \"\" | k | M | G | T | P | E (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.) <decimalExponent> ::= \"e\" <signedNumber> | \"E\" <signedNumber> No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will be rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized. Before serializing, Quantity will be put in \"canonical form\". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that: a. No precision is lost b. No fractional digits will be emitted c. The exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative. Examples: 1.5 will be serialized as \"1500m\" 1.5Gi will be serialized as \"1536Mi\" Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise. Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.)
This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation. +protobuf=true +protobuf.embed=string +protobuf.options.marshal=false +protobuf.options.(gogoproto.goproto_stringer)=false +k8s:deepcopy-gen=true +k8s:openapi-gen=true interface{} QuobyteVolumeSource \u00b6 Quobyte volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example group string string group to map volume access to Default is no group +optional readOnly boolean bool readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. +optional registry string string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin +optional user string string user to map volume access to Defaults to serivceaccount user +optional volume string string volume is a string that references an already created Quobyte volume by name. RBDVolumeSource \u00b6 RBD volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine +optional image string string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional monitors []string []string monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional readOnly boolean bool readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional secretRef LocalObjectReference LocalObjectReference user string string user is the rados user name. Default is admin. 
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional RawArtifact \u00b6 RawArtifact allows raw string content to be placed as an artifact in a container Properties Name Type Go type Required Default Description Example data string string Data is the string contents of the artifact ResourceFieldSelector \u00b6 ResourceFieldSelector represents container resources (cpu, memory) and their output format +structType=atomic Properties Name Type Go type Required Default Description Example containerName string string Container name: required for volumes, optional for env vars +optional divisor Quantity Quantity resource string string Required: resource to select ResourceList \u00b6 ResourceList ResourceRequirements \u00b6 Properties Name Type Go type Required Default Description Example limits ResourceList ResourceList requests ResourceList ResourceList ResourceTemplate \u00b6 ResourceTemplate is a template subtype to manipulate kubernetes resources Properties Name Type Go type Required Default Description Example action string string Action is the action to perform to the resource. Must be one of: get, create, apply, delete, replace, patch failureCondition string string FailureCondition is a label selector expression which describes the conditions of the k8s resource in which the step was considered failed flags []string []string Flags is a set of additional options passed to kubectl before submitting a resource I.e. to disable resource validation: flags: [ \"--validate=false\" # disable resource validation ] manifest string string Manifest contains the kubernetes manifest manifestFrom ManifestFrom ManifestFrom mergeStrategy string string MergeStrategy is the strategy used to merge a patch. It defaults to \"strategic\" Must be one of: strategic, merge, json setOwnerReference boolean bool SetOwnerReference sets the reference to the workflow on the OwnerReference of generated resource. successCondition string string SuccessCondition is a label selector expression which describes the conditions of the k8s resource in which it is acceptable to proceed to the following step RetryAffinity \u00b6 Properties Name Type Go type Required Default Description Example nodeAntiAffinity RetryNodeAntiAffinity RetryNodeAntiAffinity RetryNodeAntiAffinity \u00b6 In order to prevent running steps on the same host, it uses \"kubernetes.io/hostname\". interface{} RetryPolicy \u00b6 Name Type Go type Default Description Example RetryPolicy string string RetryStrategy \u00b6 RetryStrategy provides controls on how to retry a workflow step Properties Name Type Go type Required Default Description Example affinity RetryAffinity RetryAffinity backoff Backoff Backoff expression string string Expression is a condition expression for when a node will be retried. 
If it evaluates to false, the node will not be retried and the retry strategy will be ignored limit IntOrString IntOrString retryPolicy RetryPolicy RetryPolicy S3Artifact \u00b6 S3Artifact is the location of an S3 artifact Properties Name Type Go type Required Default Description Example accessKeySecret SecretKeySelector SecretKeySelector bucket string string Bucket is the name of the bucket caSecret SecretKeySelector SecretKeySelector createBucketIfNotPresent CreateS3BucketOptions CreateS3BucketOptions encryptionOptions S3EncryptionOptions S3EncryptionOptions endpoint string string Endpoint is the hostname of the bucket endpoint insecure boolean bool Insecure will connect to the service with TLS key string string Key is the key in the bucket where the artifact resides region string string Region contains the optional bucket region roleARN string string RoleARN is the Amazon Resource Name (ARN) of the role to assume. secretKeySecret SecretKeySelector SecretKeySelector useSDKCreds boolean bool UseSDKCreds tells the driver to figure out credentials based on sdk defaults. S3EncryptionOptions \u00b6 S3EncryptionOptions used to determine encryption options during s3 operations Properties Name Type Go type Required Default Description Example enableEncryption boolean bool EnableEncryption tells the driver to encrypt objects if set to true. If kmsKeyId and serverSideCustomerKeySecret are not set, SSE-S3 will be used kmsEncryptionContext string string KmsEncryptionContext is a json blob that contains an encryption context. See https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context for more information kmsKeyId string string KMSKeyId tells the driver to encrypt the object using the specified KMS Key. serverSideCustomerKeySecret SecretKeySelector SecretKeySelector SELinuxOptions \u00b6 SELinuxOptions are the labels to be applied to the container Properties Name Type Go type Required Default Description Example level string string Level is SELinux level label that applies to the container. +optional role string string Role is a SELinux role label that applies to the container. +optional type string string Type is a SELinux type label that applies to the container. +optional user string string User is a SELinux user label that applies to the container. +optional ScaleIOVolumeSource \u00b6 ScaleIOVolumeSource represents a persistent ScaleIO volume Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Default is \"xfs\". +optional gateway string string gateway is the host address of the ScaleIO API Gateway. protectionDomain string string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. +optional readOnly boolean bool readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional secretRef LocalObjectReference LocalObjectReference sslEnabled boolean bool sslEnabled Flag enable/disable SSL communication with Gateway, default false +optional storageMode string string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. +optional storagePool string string storagePool is the ScaleIO Storage Pool associated with the protection domain. +optional system string string system is the name of the storage system as configured in ScaleIO. 
volumeName string string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. ScriptTemplate \u00b6 ScriptTemplate is a template subtype to enable scripting through code steps Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. 
+optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext source string string Source contains the source code of the script to execute startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional SeccompProfile \u00b6 Only one profile source may be set. +union Properties Name Type Go type Required Default Description Example localhostProfile string string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is \"Localhost\". +optional type SeccompProfileType SeccompProfileType SeccompProfileType \u00b6 +enum Name Type Go type Default Description Example SeccompProfileType string string +enum SecretEnvSource \u00b6 The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Properties Name Type Go type Required Default Description Example name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 
+optional optional boolean bool Specify whether the Secret must be defined +optional SecretKeySelector \u00b6 +structType=atomic Properties Name Type Go type Required Default Description Example key string string The key of the secret to select from. Must be a valid secret key. name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool Specify whether the Secret or its key must be defined +optional SecretProjection \u00b6 The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. Properties Name Type Go type Required Default Description Example items [] KeyToPath []*KeyToPath items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool optional field specify whether the Secret or its key must be defined +optional SecretVolumeSource \u00b6 The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional items [] KeyToPath []*KeyToPath items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional optional boolean bool optional field specify whether the Secret or its keys must be defined +optional secretName string string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret +optional SecurityContext \u00b6 Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. 
Properties Name Type Go type Required Default Description Example allowPrivilegeEscalation boolean bool AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. +optional capabilities Capabilities Capabilities privileged boolean bool Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. +optional procMount ProcMountType ProcMountType readOnlyRootFilesystem boolean bool Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. +optional runAsGroup int64 (formatted integer) int64 The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. +optional runAsNonRoot boolean bool Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. +optional runAsUser int64 (formatted integer) int64 The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. +optional seLinuxOptions SELinuxOptions SELinuxOptions seccompProfile SeccompProfile SeccompProfile windowsOptions WindowsSecurityContextOptions WindowsSecurityContextOptions SemaphoreRef \u00b6 SemaphoreRef is a reference of Semaphore Properties Name Type Go type Required Default Description Example configMapKeyRef ConfigMapKeySelector ConfigMapKeySelector namespace string string \"[namespace of workflow]\" Sequence \u00b6 Sequence expands a workflow step into numeric range Properties Name Type Go type Required Default Description Example count IntOrString IntOrString end IntOrString IntOrString format string string Format is a printf format string to format the value in the sequence start IntOrString IntOrString ServiceAccountTokenProjection \u00b6 ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). Properties Name Type Go type Required Default Description Example audience string string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. 
+optional expirationSeconds int64 (formatted integer) int64 expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. +optional path string string path is the path relative to the mount point of the file to project the token into. StorageMedium \u00b6 Name Type Go type Default Description Example StorageMedium string string StorageOSVolumeSource \u00b6 Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. +optional readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional secretRef LocalObjectReference LocalObjectReference volumeName string string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to \"default\" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. +optional SuppliedValueFrom \u00b6 interface{} SuspendTemplate \u00b6 SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time Properties Name Type Go type Required Default Description Example duration string string Duration is the seconds to wait before automatically resuming a template. Must be a string. Default unit is seconds. Could also be a Duration, e.g.: \"2m\", \"6h\" Synchronization \u00b6 Synchronization holds synchronization lock configuration Properties Name Type Go type Required Default Description Example mutex Mutex Mutex semaphore SemaphoreRef SemaphoreRef Sysctl \u00b6 Sysctl defines a kernel parameter to be set Properties Name Type Go type Required Default Description Example name string string Name of a property to set value string string Value of a property to set TCPSocketAction \u00b6 TCPSocketAction describes an action based on opening a socket Properties Name Type Go type Required Default Description Example host string string Optional: Host name to connect to, defaults to the pod IP. +optional port IntOrString IntOrString TaintEffect \u00b6 +enum Name Type Go type Default Description Example TaintEffect string string +enum TarStrategy \u00b6 TarStrategy will tar and gzip the file or directory when saving Properties Name Type Go type Required Default Description Example compressionLevel int32 (formatted integer) int32 CompressionLevel specifies the gzip compression level to use for the artifact. Defaults to gzip.DefaultCompression. 
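The SuspendTemplate and TarStrategy models above are normally consumed as plain YAML fields in a Workflow manifest rather than through this generated reference. The following is a minimal, illustrative sketch only: the workflow name prefix, step and template names, image, and file path are assumptions made for this example, not values defined by the reference.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: approval-gate-        # assumed name, illustration only
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: wait-for-approval   # runs the suspend template below
            template: gate
        - - name: build
            template: build
    - name: gate
      suspend:
        duration: "2m"                # SuspendTemplate.duration: plain seconds, or a Duration such as "2m", "6h"
    - name: build
      container:
        image: alpine:3.19            # assumed image
        command: [sh, -c, "echo built > /tmp/result.txt"]
      outputs:
        artifacts:
          - name: result
            path: /tmp/result.txt
            archive:
              tar:
                compressionLevel: 9   # TarStrategy.compressionLevel: gzip level used when tarring the artifact
```

In this sketch the workflow pauses at the gate step for the configured duration (or until it is resumed) before the build step runs, and the output artifact is tarred and gzipped at the requested compression level.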
Template \u00b6 Template is a reusable and composable unit of execution in a workflow Properties Name Type Go type Required Default Description Example activeDeadlineSeconds IntOrString IntOrString affinity Affinity Affinity archiveLocation ArtifactLocation ArtifactLocation automountServiceAccountToken boolean bool AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods. ServiceAccountName of ExecutorConfig must be specified if this value is false. container Container Container containerSet ContainerSetTemplate ContainerSetTemplate daemon boolean bool Daemon will allow a workflow to proceed to the next step so long as the container reaches readiness dag DAGTemplate DAGTemplate data Data Data executor ExecutorConfig ExecutorConfig failFast boolean bool FailFast, if specified, will fail this template if any of its child pods has failed. This is useful for when this template is expanded with withItems , etc. hostAliases [] HostAlias []*HostAlias HostAliases is an optional list of hosts and IPs that will be injected into the pod spec +patchStrategy=merge +patchMergeKey=ip http HTTP HTTP initContainers [] UserContainer []*UserContainer InitContainers is a list of containers which run before the main container. +patchStrategy=merge +patchMergeKey=name inputs Inputs Inputs memoize Memoize Memoize metadata Metadata Metadata metrics Metrics Metrics name string string Name is the name of the template nodeSelector map of string map[string]string NodeSelector is a selector to schedule this step of the workflow to be run on the selected node(s). Overrides the selector set at the workflow level. outputs Outputs Outputs parallelism int64 (formatted integer) int64 Parallelism limits the max total parallel pods that can execute at the same time within the boundaries of this template invocation. If additional steps/dag templates are invoked, the pods created by those templates will not be counted towards this total. plugin Plugin Plugin podSpecPatch string string PodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of container fields which are not strings (e.g. resource limits). priority int32 (formatted integer) int32 Priority to apply to workflow pods. priorityClassName string string PriorityClassName to apply to workflow pods. resource ResourceTemplate ResourceTemplate retryStrategy RetryStrategy RetryStrategy schedulerName string string If specified, the pod will be dispatched by specified scheduler. Or it will be dispatched by workflow scope scheduler if specified. If neither specified, the pod will be dispatched by default scheduler. +optional script ScriptTemplate ScriptTemplate securityContext PodSecurityContext PodSecurityContext serviceAccountName string string ServiceAccountName to apply to workflow pods sidecars [] UserContainer []*UserContainer Sidecars is a list of containers which run alongside the main container Sidecars are automatically killed when the main container completes +patchStrategy=merge +patchMergeKey=name steps [] ParallelSteps []ParallelSteps Steps define a series of sequential/parallel workflow steps suspend SuspendTemplate SuspendTemplate synchronization Synchronization Synchronization timeout string string Timeout allows to set the total node execution timeout duration counting from the node's start time. This duration also includes time in which the node spends in Pending state. This duration may not be applied to Step or DAG templates. 
tolerations [] Toleration []*Toleration Tolerations to apply to workflow pods. +patchStrategy=merge +patchMergeKey=key volumes [] Volume []*Volume Volumes is a list of volumes that can be mounted by containers in a template. +patchStrategy=merge +patchMergeKey=name TemplateRef \u00b6 Properties Name Type Go type Required Default Description Example clusterScope boolean bool ClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate). name string string Name is the resource name of the template. template string string Template is the name of referred template in the resource. TerminationMessagePolicy \u00b6 +enum Name Type Go type Default Description Example TerminationMessagePolicy string string +enum Time \u00b6 +protobuf.options.marshal=false +protobuf.as=Timestamp +protobuf.options.(gogoproto.goproto_stringer)=false interface{} Toleration \u00b6 The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator . Properties Name Type Go type Required Default Description Example effect TaintEffect TaintEffect key string string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. +optional operator TolerationOperator TolerationOperator tolerationSeconds int64 (formatted integer) int64 TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. +optional value string string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. +optional TolerationOperator \u00b6 +enum Name Type Go type Default Description Example TolerationOperator string string +enum Transformation \u00b6 [] TransformationStep TransformationStep \u00b6 Properties Name Type Go type Required Default Description Example expression string string Expression defines an expr expression to apply Type \u00b6 Name Type Go type Default Description Example Type int64 (formatted integer) int64 TypedLocalObjectReference \u00b6 TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. +structType=atomic Properties Name Type Go type Required Default Description Example apiGroup string string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. +optional kind string string Kind is the type of resource being referenced name string string Name is the name of resource being referenced UID \u00b6 UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. Name Type Go type Default Description Example UID string string UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. 
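As a companion to the Template, TemplateRef, Toleration, TolerationOperator and TaintEffect models above, the sketch below shows how those fields are typically spelled in a Workflow manifest. It is illustrative only: the ClusterWorkflowTemplate shared-templates, its build template, the taint key dedicated, and the container image are assumptions made for the example, not values defined by this reference.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: template-ref-demo-     # assumed name, illustration only
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: call-shared
            templateRef:               # TemplateRef: run a template stored outside this Workflow
              name: shared-templates   # assumed ClusterWorkflowTemplate name
              template: build          # name of the template inside that resource
              clusterScope: true       # true because the referent is a ClusterWorkflowTemplate
    - name: pinned-task                # not invoked by main; included only to show the toleration fields
      tolerations:                     # Template.tolerations: []Toleration applied to this template's pod
        - key: dedicated               # assumed taint key
          operator: Equal              # TolerationOperator
          value: workflows
          effect: NoSchedule           # TaintEffect
      container:
        image: alpine:3.19             # assumed image
        command: [echo, "tolerates dedicated=workflows:NoSchedule"]
```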
URIScheme \u00b6 URIScheme identifies the scheme used for connection to a host for Get actions +enum Name Type Go type Default Description Example URIScheme string string URIScheme identifies the scheme used for connection to a host for Get actions +enum UserContainer \u00b6 Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe mirrorVolumeMounts boolean bool MirrorVolumeMounts will mount the same volumes specified in the main container to the container (including artifacts), at the same mountPaths. This enables dind daemon to partially see the same filesystem as the main container in order to use features such as docker volume binding name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. 
Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional ValueFrom \u00b6 ValueFrom describes a location in which to obtain the value to a parameter Properties Name Type Go type Required Default Description Example configMapKeyRef ConfigMapKeySelector ConfigMapKeySelector default AnyString AnyString event string string Selector (https://github.com/antonmedv/expr) that is evaluated against the event to get the value of the parameter. E.g. payload.message expression string string Expression, if defined, is evaluated to specify the value for the parameter jqFilter string string JQFilter expression against the resource object in resource templates jsonPath string string JSONPath of a resource to retrieve an output parameter value from in resource templates parameter string string Parameter reference to a step or dag task in which to retrieve an output parameter value from (e.g. 
'{{steps.mystep.outputs.myparam}}') path string string Path in the container to retrieve an output parameter value from in container templates supplied SuppliedValueFrom SuppliedValueFrom Volume \u00b6 Properties Name Type Go type Required Default Description Example awsElasticBlockStore AWSElasticBlockStoreVolumeSource AWSElasticBlockStoreVolumeSource azureDisk AzureDiskVolumeSource AzureDiskVolumeSource azureFile AzureFileVolumeSource AzureFileVolumeSource cephfs CephFSVolumeSource CephFSVolumeSource cinder CinderVolumeSource CinderVolumeSource configMap ConfigMapVolumeSource ConfigMapVolumeSource csi CSIVolumeSource CSIVolumeSource downwardAPI DownwardAPIVolumeSource DownwardAPIVolumeSource emptyDir EmptyDirVolumeSource EmptyDirVolumeSource ephemeral EphemeralVolumeSource EphemeralVolumeSource fc FCVolumeSource FCVolumeSource flexVolume FlexVolumeSource FlexVolumeSource flocker FlockerVolumeSource FlockerVolumeSource gcePersistentDisk GCEPersistentDiskVolumeSource GCEPersistentDiskVolumeSource gitRepo GitRepoVolumeSource GitRepoVolumeSource glusterfs GlusterfsVolumeSource GlusterfsVolumeSource hostPath HostPathVolumeSource HostPathVolumeSource iscsi ISCSIVolumeSource ISCSIVolumeSource name string string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs NFSVolumeSource NFSVolumeSource persistentVolumeClaim PersistentVolumeClaimVolumeSource PersistentVolumeClaimVolumeSource photonPersistentDisk PhotonPersistentDiskVolumeSource PhotonPersistentDiskVolumeSource portworxVolume PortworxVolumeSource PortworxVolumeSource projected ProjectedVolumeSource ProjectedVolumeSource quobyte QuobyteVolumeSource QuobyteVolumeSource rbd RBDVolumeSource RBDVolumeSource scaleIO ScaleIOVolumeSource ScaleIOVolumeSource secret SecretVolumeSource SecretVolumeSource storageos StorageOSVolumeSource StorageOSVolumeSource vsphereVolume VsphereVirtualDiskVolumeSource VsphereVirtualDiskVolumeSource VolumeDevice \u00b6 Properties Name Type Go type Required Default Description Example devicePath string string devicePath is the path inside of the container that the device will be mapped to. name string string name must match the name of a persistentVolumeClaim in the pod VolumeMount \u00b6 Properties Name Type Go type Required Default Description Example mountPath string string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation MountPropagationMode MountPropagationMode name string string This must match the Name of a Volume. readOnly boolean bool Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. +optional subPath string string Path within the volume from which the container's volume should be mounted. Defaults to \"\" (volume's root). +optional subPathExpr string string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to \"\" (volume's root). SubPathExpr and SubPath are mutually exclusive. 
+optional VolumeProjection \u00b6 Projection that may be projected along with other supported volume types Properties Name Type Go type Required Default Description Example configMap ConfigMapProjection ConfigMapProjection downwardAPI DownwardAPIProjection DownwardAPIProjection secret SecretProjection SecretProjection serviceAccountToken ServiceAccountTokenProjection ServiceAccountTokenProjection VsphereVirtualDiskVolumeSource \u00b6 Properties Name Type Go type Required Default Description Example fsType string string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. +optional storagePolicyID string string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. +optional storagePolicyName string string storagePolicyName is the storage Policy Based Management (SPBM) profile name. +optional volumePath string string volumePath is the path that identifies vSphere volume vmdk WeightedPodAffinityTerm \u00b6 The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Properties Name Type Go type Required Default Description Example podAffinityTerm PodAffinityTerm PodAffinityTerm weight int32 (formatted integer) int32 weight associated with matching the corresponding podAffinityTerm, in the range 1-100. WindowsSecurityContextOptions \u00b6 Properties Name Type Go type Required Default Description Example gmsaCredentialSpec string string GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. +optional gmsaCredentialSpecName string string GMSACredentialSpecName is the name of the GMSA credential spec to use. +optional hostProcess boolean bool HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. +optional runAsUserName string string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
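The VolumeProjection entries above combine several sources into a single projected volume. A small sketch, where the ConfigMap name and token settings are assumptions for illustration:

```yaml
volumes:
  - name: combined
    projected:
      sources:                          # each entry is a VolumeProjection
        - configMap:                    # ConfigMapProjection
            name: app-config            # assumed ConfigMap name
        - serviceAccountToken:          # ServiceAccountTokenProjection
            path: token
            expirationSeconds: 3600
```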
+optional Workflow \u00b6 Properties Name Type Go type Required Default Description Example metadata ObjectMeta ObjectMeta \u2713 ZipStrategy \u00b6 ZipStrategy will unzip zipped input artifacts interface{}","title":"The API for an executor plugin."},{"location":"executor_swagger/#the-api-for-an-executor-plugin","text":"","title":"The API for an executor plugin."},{"location":"executor_swagger/#informations","text":"","title":"Informations"},{"location":"executor_swagger/#version","text":"0.0.1","title":"Version"},{"location":"executor_swagger/#content-negotiation","text":"","title":"Content negotiation"},{"location":"executor_swagger/#uri-schemes","text":"http","title":"URI Schemes"},{"location":"executor_swagger/#consumes","text":"application/json","title":"Consumes"},{"location":"executor_swagger/#produces","text":"application/json","title":"Produces"},{"location":"executor_swagger/#all-endpoints","text":"","title":"All endpoints"},{"location":"executor_swagger/#operations","text":"Method URI Name Summary POST /api/v1/template.execute execute template","title":"operations"},{"location":"executor_swagger/#paths","text":"","title":"Paths"},{"location":"executor_swagger/#execute-template-executetemplate","text":"POST /api/v1/template.execute","title":" execute template (executeTemplate)"},{"location":"executor_swagger/#parameters","text":"Name Source Type Go type Separator Required Default Description Body body ExecuteTemplateArgs models.ExecuteTemplateArgs \u2713","title":"Parameters"},{"location":"executor_swagger/#all-responses","text":"Code Status Description Has headers Schema 200 OK schema","title":"All responses"},{"location":"executor_swagger/#responses","text":"","title":"Responses"},{"location":"executor_swagger/#200","text":"Status: OK","title":" 200"},{"location":"executor_swagger/#schema","text":"ExecuteTemplateReply","title":" Schema"},{"location":"executor_swagger/#models","text":"","title":"Models"},{"location":"executor_swagger/#awselasticblockstorevolumesource","text":"An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine +optional partition int32 (formatted integer) int32 partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). +optional readOnly boolean bool readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore +optional volumeID string string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). 
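For the execute template operation listed above, the controller POSTs an ExecuteTemplateArgs body to /api/v1/template.execute and expects an ExecuteTemplateReply. The shapes below are shown in YAML only for readability (the wire format is application/json), the plugin key is hypothetical, and the node fields are an assumption about a typical reply rather than part of this excerpt:

```yaml
# Request body (ExecuteTemplateArgs): both fields are required
workflow:
  metadata:
    name: my-workflow            # hypothetical Workflow being executed
template:
  name: plugin-step
  plugin:
    my-plugin:                   # hypothetical plugin name and configuration
      message: hello
---
# Response body (ExecuteTemplateReply)
node:                            # NodeResult; phase/message are assumed field names
  phase: Succeeded
  message: done
requeue: 1m                      # Duration: ask the controller to call the plugin again later
```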
More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore","title":" AWSElasticBlockStoreVolumeSource"},{"location":"executor_swagger/#affinity","text":"Properties Name Type Go type Required Default Description Example nodeAffinity NodeAffinity NodeAffinity podAffinity PodAffinity PodAffinity podAntiAffinity PodAntiAffinity PodAntiAffinity","title":" Affinity"},{"location":"executor_swagger/#amount","text":"+kubebuilder:validation:Type=number interface{}","title":" Amount"},{"location":"executor_swagger/#anystring","text":"It will unmarshall int64, int32, float64, float32, boolean, a plain string and represents it as string. It will marshall back to string - marshalling is not symmetric. Name Type Go type Default Description Example AnyString string string It will unmarshall int64, int32, float64, float32, boolean, a plain string and represents it as string. It will marshall back to string - marshalling is not symmetric.","title":" AnyString"},{"location":"executor_swagger/#archivestrategy","text":"ArchiveStrategy describes how to archive files/directory when saving artifacts Properties Name Type Go type Required Default Description Example none NoneStrategy NoneStrategy tar TarStrategy TarStrategy zip ZipStrategy ZipStrategy","title":" ArchiveStrategy"},{"location":"executor_swagger/#arguments","text":"Arguments to a template Properties Name Type Go type Required Default Description Example artifacts Artifacts Artifacts parameters [] Parameter []*Parameter Parameters is the list of parameters to pass to the template or workflow +patchStrategy=merge +patchMergeKey=name","title":" Arguments"},{"location":"executor_swagger/#artifact","text":"Artifact indicates an artifact to place at a specified path Properties Name Type Go type Required Default Description Example archive ArchiveStrategy ArchiveStrategy archiveLogs boolean bool ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC artifactory ArtifactoryArtifact ArtifactoryArtifact azure AzureArtifact AzureArtifact deleted boolean bool Has this been deleted? from string string From allows an artifact to reference an artifact from a previous step fromExpression string string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCSArtifact git GitArtifact GitArtifact globalName string string GlobalName exports an output artifact to the global scope, making it available as '{{workflow.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFSArtifact http HTTPArtifact HTTPArtifact mode int32 (formatted integer) int32 mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string string name of the artifact. must be unique within a template's inputs/outputs. 
optional boolean bool Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSSArtifact path string string Path is the container path to the artifact raw RawArtifact RawArtifact recurseMode boolean bool If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3Artifact subPath string string SubPath allows an artifact to be sourced from a subpath within the specified source","title":" Artifact"},{"location":"executor_swagger/#artifactgc","text":"ArtifactGC describes how to delete artifacts from completed Workflows - this is embedded into the WorkflowLevelArtifactGC, and also used for individual Artifacts to override that as needed Properties Name Type Go type Required Default Description Example podMetadata Metadata Metadata serviceAccountName string string ServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion strategy ArtifactGCStrategy ArtifactGCStrategy","title":" ArtifactGC"},{"location":"executor_swagger/#artifactgcstrategy","text":"Name Type Go type Default Description Example ArtifactGCStrategy string string","title":" ArtifactGCStrategy"},{"location":"executor_swagger/#artifactlocation","text":"It is used as single artifact in the context of inputs/outputs (e.g. outputs.artifacts.artname). It is also used to describe the location of multiple artifacts such as the archive location of a single workflow step, which the executor will use as a default location to store its files. Properties Name Type Go type Required Default Description Example archiveLogs boolean bool ArchiveLogs indicates if the container logs should be archived artifactory ArtifactoryArtifact ArtifactoryArtifact azure AzureArtifact AzureArtifact gcs GCSArtifact GCSArtifact git GitArtifact GitArtifact hdfs HDFSArtifact HDFSArtifact http HTTPArtifact HTTPArtifact oss OSSArtifact OSSArtifact raw RawArtifact RawArtifact s3 S3Artifact S3Artifact","title":" ArtifactLocation"},{"location":"executor_swagger/#artifactpaths","text":"ArtifactPaths expands a step from a collection of artifacts Properties Name Type Go type Required Default Description Example archive ArchiveStrategy ArchiveStrategy archiveLogs boolean bool ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC artifactory ArtifactoryArtifact ArtifactoryArtifact azure AzureArtifact AzureArtifact deleted boolean bool Has this been deleted? from string string From allows an artifact to reference an artifact from a previous step fromExpression string string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCSArtifact git GitArtifact GitArtifact globalName string string GlobalName exports an output artifact to the global scope, making it available as '{{workflow.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFSArtifact http HTTPArtifact HTTPArtifact mode int32 (formatted integer) int32 mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string string name of the artifact. must be unique within a template's inputs/outputs. 
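As an illustration of the Artifact fields listed above (path, globalName, archive, artifactGC), here is a hedged sketch of an output artifact in a template. The artifact name, path, and S3 key layout are made up, and it assumes a default artifact repository is already configured for the bucket:

```yaml
outputs:
  artifacts:
    - name: report                        # must be unique within the template's outputs
      path: /tmp/report.html              # container path to collect
      globalName: latest-report           # exported as {{workflow.outputs.artifacts.latest-report}}
      archive:
        none: {}                          # ArchiveStrategy: store the file without tarring
      s3:
        key: reports/{{workflow.name}}/report.html   # key in the configured bucket (assumed layout)
      artifactGC:
        strategy: OnWorkflowDeletion      # per-artifact override of workflow-level ArtifactGC
```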
optional boolean bool Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSSArtifact path string string Path is the container path to the artifact raw RawArtifact RawArtifact recurseMode boolean bool If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3Artifact subPath string string SubPath allows an artifact to be sourced from a subpath within the specified source","title":" ArtifactPaths"},{"location":"executor_swagger/#artifactoryartifact","text":"ArtifactoryArtifact is the location of an artifactory artifact Properties Name Type Go type Required Default Description Example passwordSecret SecretKeySelector SecretKeySelector url string string URL of the artifact usernameSecret SecretKeySelector SecretKeySelector","title":" ArtifactoryArtifact"},{"location":"executor_swagger/#artifacts","text":"[] Artifact","title":" Artifacts"},{"location":"executor_swagger/#azureartifact","text":"AzureArtifact is the location of a an Azure Storage artifact Properties Name Type Go type Required Default Description Example accountKeySecret SecretKeySelector SecretKeySelector blob string string Blob is the blob name (i.e., path) in the container where the artifact resides container string string Container is the container where resources will be stored endpoint string string Endpoint is the service url associated with an account. It is most likely \"https:// .blob.core.windows.net\" useSDKCreds boolean bool UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":" AzureArtifact"},{"location":"executor_swagger/#azuredatadiskcachingmode","text":"+enum Name Type Go type Default Description Example AzureDataDiskCachingMode string string +enum","title":" AzureDataDiskCachingMode"},{"location":"executor_swagger/#azuredatadiskkind","text":"+enum Name Type Go type Default Description Example AzureDataDiskKind string string +enum","title":" AzureDataDiskKind"},{"location":"executor_swagger/#azurediskvolumesource","text":"Properties Name Type Go type Required Default Description Example cachingMode AzureDataDiskCachingMode AzureDataDiskCachingMode diskName string string diskName is the Name of the data disk in the blob storage diskURI string string diskURI is the URI of data disk in the blob storage fsType string string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. +optional kind AzureDataDiskKind AzureDataDiskKind readOnly boolean bool readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional","title":" AzureDiskVolumeSource"},{"location":"executor_swagger/#azurefilevolumesource","text":"Properties Name Type Go type Required Default Description Example readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional secretName string string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string string shareName is the azure share Name","title":" AzureFileVolumeSource"},{"location":"executor_swagger/#backoff","text":"Backoff is a backoff strategy to use within retryStrategy Properties Name Type Go type Required Default Description Example duration string string Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. 
\"2m\", \"1h\") factor IntOrString IntOrString maxDuration string string MaxDuration is the maximum amount of time allowed for a workflow in the backoff strategy","title":" Backoff"},{"location":"executor_swagger/#basicauth","text":"BasicAuth describes the secret selectors required for basic authentication Properties Name Type Go type Required Default Description Example passwordSecret SecretKeySelector SecretKeySelector usernameSecret SecretKeySelector SecretKeySelector","title":" BasicAuth"},{"location":"executor_swagger/#csivolumesource","text":"Represents a source location of a volume to mount, managed by an external CSI driver Properties Name Type Go type Required Default Description Example driver string string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string string fsType to mount. Ex. \"ext4\", \"xfs\", \"ntfs\". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. +optional nodePublishSecretRef LocalObjectReference LocalObjectReference readOnly boolean bool readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). +optional volumeAttributes map of string map[string]string volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. +optional","title":" CSIVolumeSource"},{"location":"executor_swagger/#cache","text":"Cache is the configuration for the type of cache to be used Properties Name Type Go type Required Default Description Example configMap ConfigMapKeySelector ConfigMapKeySelector","title":" Cache"},{"location":"executor_swagger/#capabilities","text":"Properties Name Type Go type Required Default Description Example add [] Capability []Capability Added capabilities +optional drop [] Capability []Capability Removed capabilities +optional","title":" Capabilities"},{"location":"executor_swagger/#capability","text":"Capability represent POSIX capabilities type Name Type Go type Default Description Example Capability string string Capability represent POSIX capabilities type","title":" Capability"},{"location":"executor_swagger/#cephfsvolumesource","text":"Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example monitors []string []string monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / +optional readOnly boolean bool readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it +optional secretFile string string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it +optional secretRef LocalObjectReference LocalObjectReference user string string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it +optional","title":" CephFSVolumeSource"},{"location":"executor_swagger/#cindervolumesource","text":"A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md +optional readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md +optional secretRef LocalObjectReference LocalObjectReference volumeID string string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md","title":" CinderVolumeSource"},{"location":"executor_swagger/#clientcertauth","text":"ClientCertAuth holds necessary information for client authentication via certificates Properties Name Type Go type Required Default Description Example clientCertSecret SecretKeySelector SecretKeySelector clientKeySecret SecretKeySelector SecretKeySelector","title":" ClientCertAuth"},{"location":"executor_swagger/#configmapenvsource","text":"The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Properties Name Type Go type Required Default Description Example name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool Specify whether the ConfigMap must be defined +optional","title":" ConfigMapEnvSource"},{"location":"executor_swagger/#configmapkeyselector","text":"+structType=atomic Properties Name Type Go type Required Default Description Example key string string The key to select. name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool Specify whether the ConfigMap or its key must be defined +optional","title":" ConfigMapKeySelector"},{"location":"executor_swagger/#configmapprojection","text":"The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. 
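The ConfigMapKeySelector and ConfigMapEnvSource models above typically appear in a container's env and envFrom. A sketch assuming a ConfigMap named app-config:

```yaml
container:
  image: alpine:3.19                 # assumed image
  env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:             # ConfigMapKeySelector
          name: app-config
          key: log-level
          optional: true             # don't fail the pod if the key is absent
  envFrom:
    - prefix: APP_                   # EnvFromSource.prefix, must be a C_IDENTIFIER
      configMapRef:                  # ConfigMapEnvSource
        name: app-config
```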
Properties Name Type Go type Required Default Description Example items [] KeyToPath []*KeyToPath items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool optional specify whether the ConfigMap or its keys must be defined +optional","title":" ConfigMapProjection"},{"location":"executor_swagger/#configmapvolumesource","text":"The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional items [] KeyToPath []*KeyToPath items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool optional specify whether the ConfigMap or its keys must be defined +optional","title":" ConfigMapVolumeSource"},{"location":"executor_swagger/#container","text":"Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. 
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. 
Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional","title":" Container"},{"location":"executor_swagger/#containernode","text":"Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional dependencies []string []string env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. 
More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional","title":" ContainerNode"},{"location":"executor_swagger/#containerport","text":"Properties Name Type Go type Required Default Description Example containerPort int32 (formatted integer) int32 Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. 
hostIP string string What host IP to bind the external port to. +optional hostPort int32 (formatted integer) int32 Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. +optional name string string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. +optional protocol Protocol Protocol","title":" ContainerPort"},{"location":"executor_swagger/#containersetretrystrategy","text":"Properties Name Type Go type Required Default Description Example duration string string Duration is the time between each retry, examples values are \"300ms\", \"1s\" or \"5m\". Valid time units are \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\". retries IntOrString IntOrString","title":" ContainerSetRetryStrategy"},{"location":"executor_swagger/#containersettemplate","text":"Properties Name Type Go type Required Default Description Example containers [] ContainerNode []*ContainerNode retryStrategy ContainerSetRetryStrategy ContainerSetRetryStrategy volumeMounts [] VolumeMount []*VolumeMount","title":" ContainerSetTemplate"},{"location":"executor_swagger/#continueon","text":"It can be specified if the workflow should continue when the pod errors, fails or both. Properties Name Type Go type Required Default Description Example error boolean bool +optional failed boolean bool +optional","title":" ContinueOn"},{"location":"executor_swagger/#counter","text":"Counter is a Counter prometheus metric Properties Name Type Go type Required Default Description Example value string string Value is the value of the metric","title":" Counter"},{"location":"executor_swagger/#creates3bucketoptions","text":"CreateS3BucketOptions options used to determine automatic automatic bucket-creation process Properties Name Type Go type Required Default Description Example objectLocking boolean bool ObjectLocking Enable object locking","title":" CreateS3BucketOptions"},{"location":"executor_swagger/#dagtask","text":"DAGTask represents a node in the graph during DAG execution Properties Name Type Go type Required Default Description Example arguments Arguments Arguments continueOn ContinueOn ContinueOn dependencies []string []string Dependencies are name of other targets which this depends on depends string string Depends are name of other targets which this depends on hooks LifecycleHooks LifecycleHooks inline Template Template name string string Name is the name of the target onExit string string OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template. DEPRECATED: Use Hooks[exit].Template instead. template string string Name of template to execute templateRef TemplateRef TemplateRef when string string When is an expression in which the task should conditionally execute withItems [] Item []Item WithItems expands a task into multiple parallel tasks from the items in the list withParam string string WithParam expands a task into multiple parallel tasks from the value in the parameter, which is expected to be a JSON list. 
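To show how DAGTask fields such as depends, continueOn, and withParam combine, here is an illustrative DAG template fragment; the template and parameter names are invented:

```yaml
templates:
  - name: main
    dag:
      tasks:                                              # []DAGTask
        - name: generate
          template: gen-list
        - name: process
          template: consume
          depends: "generate"                             # run after generate completes
          continueOn:
            failed: true                                  # don't fail the DAG if this branch fails
          arguments:
            parameters:
              - name: item
                value: "{{item}}"
          withParam: "{{tasks.generate.outputs.result}}"  # fan out over a JSON list
```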
withSequence Sequence Sequence","title":" DAGTask"},{"location":"executor_swagger/#dagtemplate","text":"DAGTemplate is a template subtype for directed acyclic graph templates Properties Name Type Go type Required Default Description Example failFast boolean bool This flag is for DAG logic. The DAG logic has a built-in \"fail fast\" feature to stop scheduling new steps, as soon as it detects that one of the DAG nodes is failed. Then it waits until all DAG nodes are completed before failing the DAG itself. The FailFast flag default is true, if set to false, it will allow a DAG to run all branches of the DAG to completion (either success or failure), regardless of the failed outcomes of branches in the DAG. More info and example about this feature at https://github.com/argoproj/argo-workflows/issues/1442 target string string Target are one or more names of targets to execute in a DAG tasks [] DAGTask []*DAGTask Tasks are a list of DAG tasks +patchStrategy=merge +patchMergeKey=name","title":" DAGTemplate"},{"location":"executor_swagger/#data","text":"Data is a data template Properties Name Type Go type Required Default Description Example source DataSource DataSource transformation Transformation Transformation","title":" Data"},{"location":"executor_swagger/#datasource","text":"DataSource sources external data into a data template Properties Name Type Go type Required Default Description Example artifactPaths ArtifactPaths ArtifactPaths","title":" DataSource"},{"location":"executor_swagger/#downwardapiprojection","text":"Note that this is identical to a downwardAPI volume source without the default mode. Properties Name Type Go type Required Default Description Example items [] DownwardAPIVolumeFile []*DownwardAPIVolumeFile Items is a list of DownwardAPIVolume file +optional","title":" DownwardAPIProjection"},{"location":"executor_swagger/#downwardapivolumefile","text":"DownwardAPIVolumeFile represents information to create the file containing the pod field Properties Name Type Go type Required Default Description Example fieldRef ObjectFieldSelector ObjectFieldSelector mode int32 (formatted integer) int32 Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional path string string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef ResourceFieldSelector ResourceFieldSelector","title":" DownwardAPIVolumeFile"},{"location":"executor_swagger/#downwardapivolumesource","text":"Downward API volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 Optional: mode bits to use on created files by default. Must be a Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. 
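The Data, DataSource, and ArtifactPaths models above back the data template, which expands a collection of artifact files into items. A hedged sketch in which the S3 prefix and the filter expression are illustrative:

```yaml
- name: list-log-files
  data:
    source:
      artifactPaths:                    # ArtifactPaths: enumerate files under an artifact
        name: logs
        s3:
          key: logs/                    # assumed prefix in the configured bucket
    transformation:
      - expression: "filter(data, {# endsWith \".log\"})"   # expr filter over the listing
```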
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional items [] DownwardAPIVolumeFile []*DownwardAPIVolumeFile Items is a list of downward API volume file +optional","title":" DownwardAPIVolumeSource"},{"location":"executor_swagger/#duration","text":"Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json. interface{}","title":" Duration"},{"location":"executor_swagger/#emptydirvolumesource","text":"Empty directory volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example medium StorageMedium StorageMedium sizeLimit Quantity Quantity","title":" EmptyDirVolumeSource"},{"location":"executor_swagger/#envfromsource","text":"EnvFromSource represents the source of a set of ConfigMaps Properties Name Type Go type Required Default Description Example configMapRef ConfigMapEnvSource ConfigMapEnvSource prefix string string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. +optional secretRef SecretEnvSource SecretEnvSource","title":" EnvFromSource"},{"location":"executor_swagger/#envvar","text":"Properties Name Type Go type Required Default Description Example name string string Name of the environment variable. Must be a C_IDENTIFIER. value string string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to \"\". +optional valueFrom EnvVarSource EnvVarSource","title":" EnvVar"},{"location":"executor_swagger/#envvarsource","text":"Properties Name Type Go type Required Default Description Example configMapKeyRef ConfigMapKeySelector ConfigMapKeySelector fieldRef ObjectFieldSelector ObjectFieldSelector resourceFieldRef ResourceFieldSelector ResourceFieldSelector secretKeyRef SecretKeySelector SecretKeySelector","title":" EnvVarSource"},{"location":"executor_swagger/#ephemeralvolumesource","text":"Properties Name Type Go type Required Default Description Example volumeClaimTemplate PersistentVolumeClaimTemplate PersistentVolumeClaimTemplate","title":" EphemeralVolumeSource"},{"location":"executor_swagger/#execaction","text":"Properties Name Type Go type Required Default Description Example command []string []string Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions (' ', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 
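ExecAction above is what backs exec-style probes. A sketch of a liveness probe on a workflow container; the image and command are assumptions:

```yaml
container:
  image: postgres:16                    # assumed image
  livenessProbe:
    exec:                               # ExecAction: command is exec'd directly, not via a shell
      command: ["pg_isready", "-U", "postgres"]
    initialDelaySeconds: 10
    periodSeconds: 30
```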
+optional","title":" ExecAction"},{"location":"executor_swagger/#executetemplateargs","text":"Properties Name Type Go type Required Default Description Example template Template Template \u2713 workflow Workflow Workflow \u2713","title":" ExecuteTemplateArgs"},{"location":"executor_swagger/#executetemplatereply","text":"Properties Name Type Go type Required Default Description Example node NodeResult NodeResult requeue Duration Duration","title":" ExecuteTemplateReply"},{"location":"executor_swagger/#executorconfig","text":"Properties Name Type Go type Required Default Description Example serviceAccountName string string ServiceAccountName specifies the service account name of the executor container.","title":" ExecutorConfig"},{"location":"executor_swagger/#fcvolumesource","text":"Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine +optional lun int32 (formatted integer) int32 lun is Optional: FC target lun number +optional readOnly boolean bool readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional targetWWNs []string []string targetWWNs is Optional: FC target worldwide names (WWNs) +optional wwids []string []string wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. +optional","title":" FCVolumeSource"},{"location":"executor_swagger/#fieldsv1","text":"Each key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f: ', where is the name of a field in a struct, or key in a map 'v: ', where is the exact json formatted value of a list item 'i: ', where is position of a item in a list 'k: ', where is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set. The exact format is defined in sigs.k8s.io/structured-merge-diff +protobuf.options.(gogoproto.goproto_stringer)=false interface{}","title":" FieldsV1"},{"location":"executor_swagger/#flexvolumesource","text":"FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Properties Name Type Go type Required Default Description Example driver string string driver is the name of the driver to use for this volume. fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". The default filesystem depends on FlexVolume script. +optional options map of string map[string]string options is Optional: this field holds extra command options if any. +optional readOnly boolean bool readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
+optional secretRef LocalObjectReference LocalObjectReference","title":" FlexVolumeSource"},{"location":"executor_swagger/#flockervolumesource","text":"One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example datasetName string string datasetName is Name of the dataset stored as metadata -> name on the dataset for Flocker should be considered as deprecated +optional datasetUUID string string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset +optional","title":" FlockerVolumeSource"},{"location":"executor_swagger/#gcepersistentdiskvolumesource","text":"A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine +optional partition int32 (formatted integer) int32 partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk +optional pdName string string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean bool readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk +optional","title":" GCEPersistentDiskVolumeSource"},{"location":"executor_swagger/#gcsartifact","text":"GCSArtifact is the location of a GCS artifact Properties Name Type Go type Required Default Description Example bucket string string Bucket is the name of the bucket key string string Key is the path in the bucket where the artifact resides serviceAccountKeySecret SecretKeySelector SecretKeySelector","title":" GCSArtifact"},{"location":"executor_swagger/#grpcaction","text":"Properties Name Type Go type Required Default Description Example port int32 (formatted integer) int32 Port number of the gRPC service. Number must be in the range 1 to 65535. service string string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC. 
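GCSArtifact above is used like the other artifact locations. An input-artifact sketch with an assumed bucket name and credentials secret:

```yaml
inputs:
  artifacts:
    - name: dataset
      path: /data/input.csv
      gcs:                              # GCSArtifact
        bucket: my-bucket               # assumed bucket name
        key: datasets/input.csv
        serviceAccountKeySecret:        # SecretKeySelector pointing at a service account JSON key
          name: gcs-credentials         # assumed Secret
          key: serviceAccountKey
```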
+optional +default=\"\" | |","title":" GRPCAction"},{"location":"executor_swagger/#gauge","text":"Gauge is a Gauge prometheus metric Properties Name Type Go type Required Default Description Example operation GaugeOperation GaugeOperation realtime boolean bool Realtime emits this metric in real time if applicable value string string Value is the value to be used in the operation with the metric's current value. If no operation is set, value is the value of the metric","title":" Gauge"},{"location":"executor_swagger/#gaugeoperation","text":"Name Type Go type Default Description Example GaugeOperation string string","title":" GaugeOperation"},{"location":"executor_swagger/#gitartifact","text":"GitArtifact is the location of an git artifact Properties Name Type Go type Required Default Description Example branch string string Branch is the branch to fetch when SingleBranch is enabled depth uint64 (formatted integer) uint64 Depth specifies clones/fetches should be shallow and include the given number of commits from the branch tip disableSubmodules boolean bool DisableSubmodules disables submodules during git clone fetch []string []string Fetch specifies a number of refs that should be fetched before checkout insecureIgnoreHostKey boolean bool InsecureIgnoreHostKey disables SSH strict host key checking during git clone passwordSecret SecretKeySelector SecretKeySelector repo string string Repo is the git repository revision string string Revision is the git commit, tag, branch to checkout singleBranch boolean bool SingleBranch enables single branch clone, using the branch parameter sshPrivateKeySecret SecretKeySelector SecretKeySelector usernameSecret SecretKeySelector SecretKeySelector","title":" GitArtifact"},{"location":"executor_swagger/#gitrepovolumesource","text":"DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Properties Name Type Go type Required Default Description Example directory string string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. +optional repository string string repository is the URL revision string string revision is the commit hash for the specified revision. +optional","title":" GitRepoVolumeSource"},{"location":"executor_swagger/#glusterfsvolumesource","text":"Glusterfs volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example endpoints string string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean bool readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. 
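GitArtifact above supports shallow, single-branch checkouts. An input-artifact sketch; the repository URL is only an example:

```yaml
inputs:
  artifacts:
    - name: source
      path: /src
      git:                              # GitArtifact
        repo: https://github.com/argoproj/argo-workflows.git
        revision: main                  # commit, tag, or branch to check out
        depth: 1                        # shallow fetch of a single commit
        singleBranch: true              # fetch only the branch named below
        branch: main
```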
More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod +optional","title":" GlusterfsVolumeSource"},{"location":"executor_swagger/#hdfsartifact","text":"HDFSArtifact is the location of an HDFS artifact Properties Name Type Go type Required Default Description Example addresses []string []string Addresses is accessible addresses of HDFS name nodes force boolean bool Force copies a file forcibly even if it exists hdfsUser string string HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used. krbCCacheSecret SecretKeySelector SecretKeySelector krbConfigConfigMap ConfigMapKeySelector ConfigMapKeySelector krbKeytabSecret SecretKeySelector SecretKeySelector krbRealm string string KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used. krbServicePrincipalName string string KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used. krbUsername string string KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used. path string string Path is a file path in HDFS","title":" HDFSArtifact"},{"location":"executor_swagger/#http","text":"Properties Name Type Go type Required Default Description Example body string string Body is content of the HTTP Request bodyFrom HTTPBodySource HTTPBodySource headers HTTPHeaders HTTPHeaders insecureSkipVerify boolean bool InsecureSkipVerify is a bool when if set to true will skip TLS verification for the HTTP client method string string Method is HTTP methods for HTTP Request successCondition string string SuccessCondition is an expression if evaluated to true is considered successful timeoutSeconds int64 (formatted integer) int64 TimeoutSeconds is request timeout for HTTP Request. Default is 30 seconds url string string URL of the HTTP Request","title":" HTTP"},{"location":"executor_swagger/#httpartifact","text":"HTTPArtifact allows a file served on HTTP to be placed as an input artifact in a container Properties Name Type Go type Required Default Description Example auth HTTPAuth HTTPAuth headers [] Header []*Header Headers are an optional list of headers to send with HTTP requests for artifacts url string string URL of the artifact","title":" HTTPArtifact"},{"location":"executor_swagger/#httpauth","text":"Properties Name Type Go type Required Default Description Example basicAuth BasicAuth BasicAuth clientCert ClientCertAuth ClientCertAuth oauth2 OAuth2Auth OAuth2Auth","title":" HTTPAuth"},{"location":"executor_swagger/#httpbodysource","text":"Properties Name Type Go type Required Default Description Example bytes []uint8 (formatted integer) []uint8","title":" HTTPBodySource"},{"location":"executor_swagger/#httpgetaction","text":"Properties Name Type Go type Required Default Description Example host string string Host name to connect to, defaults to the pod IP. You probably want to set \"Host\" in httpHeaders instead. +optional httpHeaders [] HTTPHeader []*HTTPHeader Custom headers to set in the request. HTTP allows repeated headers. +optional path string string Path to access on the HTTP server. 
+optional port IntOrString IntOrString scheme URIScheme URIScheme","title":" HTTPGetAction"},{"location":"executor_swagger/#httpheader","text":"Properties Name Type Go type Required Default Description Example name string string value string string valueFrom HTTPHeaderSource HTTPHeaderSource","title":" HTTPHeader"},{"location":"executor_swagger/#httpheadersource","text":"Properties Name Type Go type Required Default Description Example secretKeyRef SecretKeySelector SecretKeySelector","title":" HTTPHeaderSource"},{"location":"executor_swagger/#httpheaders","text":"[] HTTPHeader","title":" HTTPHeaders"},{"location":"executor_swagger/#header","text":"Header indicate a key-value request header to be used when fetching artifacts over HTTP Properties Name Type Go type Required Default Description Example name string string Name is the header name value string string Value is the literal value to use for the header","title":" Header"},{"location":"executor_swagger/#histogram","text":"Histogram is a Histogram prometheus metric Properties Name Type Go type Required Default Description Example buckets [] Amount []Amount Buckets is a list of bucket divisors for the histogram value string string Value is the value of the metric","title":" Histogram"},{"location":"executor_swagger/#hostalias","text":"HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Properties Name Type Go type Required Default Description Example hostnames []string []string Hostnames for the above IP address. ip string string IP address of the host file entry.","title":" HostAlias"},{"location":"executor_swagger/#hostpathtype","text":"+enum Name Type Go type Default Description Example HostPathType string string +enum","title":" HostPathType"},{"location":"executor_swagger/#hostpathvolumesource","text":"Host path volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example path string string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type HostPathType HostPathType","title":" HostPathVolumeSource"},{"location":"executor_swagger/#iscsivolumesource","text":"ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example chapAuthDiscovery boolean bool chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication +optional chapAuthSession boolean bool chapAuthSession defines whether support iSCSI Session CHAP authentication +optional fsType string string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine +optional initiatorName string string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface : will be created for the connection. +optional iqn string string iqn is the target iSCSI Qualified Name. iscsiInterface string string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). 
+optional lun int32 (formatted integer) int32 lun represents iSCSI Target Lun number. portals []string []string portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). +optional readOnly boolean bool readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. +optional secretRef LocalObjectReference LocalObjectReference targetPortal string string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).","title":" ISCSIVolumeSource"},{"location":"executor_swagger/#inputs","text":"Inputs are the mechanism for passing parameters, artifacts, volumes from one template to another Properties Name Type Go type Required Default Description Example artifacts Artifacts Artifacts parameters [] Parameter []*Parameter Parameters are a list of parameters passed as inputs +patchStrategy=merge +patchMergeKey=name","title":" Inputs"},{"location":"executor_swagger/#intorstring","text":"+protobuf=true +protobuf.options.(gogoproto.goproto_stringer)=false +k8s:openapi-gen=true Properties Name Type Go type Required Default Description Example IntVal int32 (formatted integer) int32 StrVal string string Type Type Type","title":" IntOrString"},{"location":"executor_swagger/#item","text":"+protobuf.options.(gogoproto.goproto_stringer)=false +kubebuilder:validation:Type=object interface{}","title":" Item"},{"location":"executor_swagger/#keytopath","text":"Properties Name Type Go type Required Default Description Example key string string key is the key to project. mode int32 (formatted integer) int32 mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional path string string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'.","title":" KeyToPath"},{"location":"executor_swagger/#labelselector","text":"A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. +structType=atomic Properties Name Type Go type Required Default Description Example matchExpressions [] LabelSelectorRequirement []*LabelSelectorRequirement matchExpressions is a list of label selector requirements. The requirements are ANDed. +optional matchLabels map of string map[string]string matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values array contains only \"value\". The requirements are ANDed. 
+optional","title":" LabelSelector"},{"location":"executor_swagger/#labelselectoroperator","text":"Name Type Go type Default Description Example LabelSelectorOperator string string","title":" LabelSelectorOperator"},{"location":"executor_swagger/#labelselectorrequirement","text":"A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Properties Name Type Go type Required Default Description Example key string string key is the label key that the selector applies to. +patchMergeKey=key +patchStrategy=merge operator LabelSelectorOperator LabelSelectorOperator values []string []string values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +optional","title":" LabelSelectorRequirement"},{"location":"executor_swagger/#lifecycle","text":"Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Properties Name Type Go type Required Default Description Example postStart LifecycleHandler LifecycleHandler preStop LifecycleHandler LifecycleHandler","title":" Lifecycle"},{"location":"executor_swagger/#lifecyclehandler","text":"LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Properties Name Type Go type Required Default Description Example exec ExecAction ExecAction httpGet HTTPGetAction HTTPGetAction tcpSocket TCPSocketAction TCPSocketAction","title":" LifecycleHandler"},{"location":"executor_swagger/#lifecyclehook","text":"Properties Name Type Go type Required Default Description Example arguments Arguments Arguments expression string string Expression is a condition expression for when a node will be retried. If it evaluates to false, the node will not be retried and the retry strategy will be ignored template string string Template is the name of the template to execute by the hook templateRef TemplateRef TemplateRef","title":" LifecycleHook"},{"location":"executor_swagger/#lifecyclehooks","text":"LifecycleHooks","title":" LifecycleHooks"},{"location":"executor_swagger/#localobjectreference","text":"LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. +structType=atomic Properties Name Type Go type Required Default Description Example name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional","title":" LocalObjectReference"},{"location":"executor_swagger/#managedfieldsentry","text":"ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to. Properties Name Type Go type Required Default Description Example apiVersion string string APIVersion defines the version of this resource that this field set applies to. The format is \"group/version\" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted. 
fieldsType string string FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: \"FieldsV1\" fieldsV1 FieldsV1 FieldsV1 manager string string Manager is an identifier of the workflow managing these fields. operation ManagedFieldsOperationType ManagedFieldsOperationType subresource string string Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource. time Time Time","title":" ManagedFieldsEntry"},{"location":"executor_swagger/#managedfieldsoperationtype","text":"Name Type Go type Default Description Example ManagedFieldsOperationType string string","title":" ManagedFieldsOperationType"},{"location":"executor_swagger/#manifestfrom","text":"Properties Name Type Go type Required Default Description Example artifact Artifact Artifact","title":" ManifestFrom"},{"location":"executor_swagger/#memoize","text":"Memoization enables caching for the Outputs of the template Properties Name Type Go type Required Default Description Example cache Cache Cache key string string Key is the key to use as the caching key maxAge string string MaxAge is the maximum age (e.g. \"180s\", \"24h\") of an entry that is still considered valid. If an entry is older than the MaxAge, it will be ignored.","title":" Memoize"},{"location":"executor_swagger/#metadata","text":"Pod metadata Properties Name Type Go type Required Default Description Example annotations map of string map[string]string labels map of string map[string]string","title":" Metadata"},{"location":"executor_swagger/#metriclabel","text":"MetricLabel is a single label for a prometheus metric Properties Name Type Go type Required Default Description Example key string string value string string","title":" MetricLabel"},{"location":"executor_swagger/#metrics","text":"Metrics are a list of metrics emitted from a Workflow/Template Properties Name Type Go type Required Default Description Example prometheus [] Prometheus []*Prometheus Prometheus is a list of prometheus metrics to be emitted","title":" Metrics"},{"location":"executor_swagger/#mountpropagationmode","text":"+enum Name Type Go type Default Description Example MountPropagationMode string string +enum","title":" MountPropagationMode"},{"location":"executor_swagger/#mutex","text":"Mutex holds Mutex configuration Properties Name Type Go type Required Default Description Example name string string name of the mutex namespace string string \"[namespace of workflow]\"","title":" Mutex"},{"location":"executor_swagger/#nfsvolumesource","text":"NFS volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example path string string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean bool readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs +optional server string string server is the hostname or IP address of the NFS server. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs","title":" NFSVolumeSource"},{"location":"executor_swagger/#nodeaffinity","text":"Properties Name Type Go type Required Default Description Example preferredDuringSchedulingIgnoredDuringExecution [] PreferredSchedulingTerm []*PreferredSchedulingTerm The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. +optional requiredDuringSchedulingIgnoredDuringExecution NodeSelector NodeSelector","title":" NodeAffinity"},{"location":"executor_swagger/#nodephase","text":"Name Type Go type Default Description Example NodePhase string string","title":" NodePhase"},{"location":"executor_swagger/#noderesult","text":"Properties Name Type Go type Required Default Description Example message string string outputs Outputs Outputs phase NodePhase NodePhase progress Progress Progress","title":" NodeResult"},{"location":"executor_swagger/#nodeselector","text":"A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. +structType=atomic Properties Name Type Go type Required Default Description Example nodeSelectorTerms [] NodeSelectorTerm []*NodeSelectorTerm Required. A list of node selector terms. The terms are ORed.","title":" NodeSelector"},{"location":"executor_swagger/#nodeselectoroperator","text":"A node selector operator is the set of operators that can be used in a node selector requirement. +enum Name Type Go type Default Description Example NodeSelectorOperator string string A node selector operator is the set of operators that can be used in a node selector requirement. +enum","title":" NodeSelectorOperator"},{"location":"executor_swagger/#nodeselectorrequirement","text":"A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Properties Name Type Go type Required Default Description Example key string string The label key that the selector applies to. operator NodeSelectorOperator NodeSelectorOperator values []string []string An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. +optional","title":" NodeSelectorRequirement"},{"location":"executor_swagger/#nodeselectorterm","text":"A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. +structType=atomic Properties Name Type Go type Required Default Description Example matchExpressions [] NodeSelectorRequirement []*NodeSelectorRequirement A list of node selector requirements by node's labels. 
+optional matchFields [] NodeSelectorRequirement []*NodeSelectorRequirement A list of node selector requirements by node's fields. +optional","title":" NodeSelectorTerm"},{"location":"executor_swagger/#nonestrategy","text":"NoneStrategy indicates to skip tar process and upload the files or directory tree as independent files. Note that if the artifact is a directory, the artifact driver must support the ability to save/load the directory appropriately. interface{}","title":" NoneStrategy"},{"location":"executor_swagger/#oauth2auth","text":"OAuth2Auth holds all information for client authentication via OAuth2 tokens Properties Name Type Go type Required Default Description Example clientIDSecret SecretKeySelector SecretKeySelector clientSecretSecret SecretKeySelector SecretKeySelector endpointParams [] OAuth2EndpointParam []*OAuth2EndpointParam scopes []string []string tokenURLSecret SecretKeySelector SecretKeySelector","title":" OAuth2Auth"},{"location":"executor_swagger/#oauth2endpointparam","text":"EndpointParam is for requesting optional fields that should be sent in the oauth request Properties Name Type Go type Required Default Description Example key string string Name is the header name value string string Value is the literal value to use for the header","title":" OAuth2EndpointParam"},{"location":"executor_swagger/#ossartifact","text":"OSSArtifact is the location of an Alibaba Cloud OSS artifact Properties Name Type Go type Required Default Description Example accessKeySecret SecretKeySelector SecretKeySelector bucket string string Bucket is the name of the bucket createBucketIfNotPresent boolean bool CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist endpoint string string Endpoint is the hostname of the bucket endpoint key string string Key is the path in the bucket where the artifact resides lifecycleRule OSSLifecycleRule OSSLifecycleRule secretKeySecret SecretKeySelector SecretKeySelector securityToken string string SecurityToken is the user's temporary security token. For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm useSDKCreds boolean bool UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":" OSSArtifact"},{"location":"executor_swagger/#osslifecyclerule","text":"OSSLifecycleRule specifies how to manage bucket's lifecycle Properties Name Type Go type Required Default Description Example markDeletionAfterDays int32 (formatted integer) int32 MarkDeletionAfterDays is the number of days before we delete objects in the bucket markInfrequentAccessAfterDays int32 (formatted integer) int32 MarkInfrequentAccessAfterDays is the number of days before we convert the objects in the bucket to Infrequent Access (IA) storage type","title":" OSSLifecycleRule"},{"location":"executor_swagger/#objectfieldselector","text":"+structType=atomic Properties Name Type Go type Required Default Description Example apiVersion string string Version of the schema the FieldPath is written in terms of, defaults to \"v1\". 
+optional fieldPath string string Path of the field to select in the specified API version.","title":" ObjectFieldSelector"},{"location":"executor_swagger/#objectmeta","text":"Properties Name Type Go type Required Default Description Example name string string namespace string string uid string string","title":" ObjectMeta"},{"location":"executor_swagger/#outputs","text":"Outputs hold parameters, artifacts, and results from a step Properties Name Type Go type Required Default Description Example artifacts Artifacts Artifacts exitCode string string ExitCode holds the exit code of a script template parameters [] Parameter []*Parameter Parameters holds the list of output parameters produced by a step +patchStrategy=merge +patchMergeKey=name result string string Result holds the result (stdout) of a script template","title":" Outputs"},{"location":"executor_swagger/#ownerreference","text":"OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. +structType=atomic Properties Name Type Go type Required Default Description Example apiVersion string string API version of the referent. blockOwnerDeletion boolean bool If true, AND if the owner has the \"foregroundDeletion\" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs \"delete\" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. +optional controller boolean bool If true, this reference points to the managing controller. +optional kind string string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string string Name of the referent. 
More info: http://kubernetes.io/docs/user-guide/identifiers#names uid UID UID","title":" OwnerReference"},{"location":"executor_swagger/#parallelsteps","text":"+kubebuilder:validation:Type=array interface{}","title":" ParallelSteps"},{"location":"executor_swagger/#parameter","text":"Parameter indicate a passed string parameter to a service template with an optional default value Properties Name Type Go type Required Default Description Example default AnyString AnyString description AnyString AnyString enum [] AnyString []AnyString Enum holds a list of string values to choose from, for the actual value of the parameter globalName string string GlobalName exports an output parameter to the global scope, making it available as '{{workflow.outputs.parameters.XXXX}} and in workflow.status.outputs.parameters name string string Name is the parameter name value AnyString AnyString valueFrom ValueFrom ValueFrom","title":" Parameter"},{"location":"executor_swagger/#persistentvolumeaccessmode","text":"+enum Name Type Go type Default Description Example PersistentVolumeAccessMode string string +enum","title":" PersistentVolumeAccessMode"},{"location":"executor_swagger/#persistentvolumeclaimspec","text":"PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Properties Name Type Go type Required Default Description Example accessModes [] PersistentVolumeAccessMode []PersistentVolumeAccessMode accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 +optional dataSource TypedLocalObjectReference TypedLocalObjectReference dataSourceRef TypedLocalObjectReference TypedLocalObjectReference resources ResourceRequirements ResourceRequirements selector LabelSelector LabelSelector storageClassName string string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 +optional volumeMode PersistentVolumeMode PersistentVolumeMode volumeName string string volumeName is the binding reference to the PersistentVolume backing this claim. +optional","title":" PersistentVolumeClaimSpec"},{"location":"executor_swagger/#persistentvolumeclaimtemplate","text":"PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. Properties Name Type Go type Required Default Description Example annotations map of string map[string]string Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations +optional clusterName string string Deprecated: ClusterName is a legacy field that was always cleared by the system and never used; it will be removed completely in 1.25. The name in the go struct is changed to help clients detect accidental use. +optional | | | creationTimestamp | Time | Time | | | | | | deletionGracePeriodSeconds | int64 (formatted integer)| int64 | | | Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. 
+optional | | | deletionTimestamp | Time | Time | | | | | | finalizers | []string| []string | | | Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. +optional +patchStrategy=merge | | | generateName | string| string | | | GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will return a 409. Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency +optional | | | generation | int64 (formatted integer)| int64 | | | A sequence number representing a specific generation of the desired state. Populated by the system. Read-only. +optional | | | labels | map of string| map[string]string | | | Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels +optional | | | managedFields | [] ManagedFieldsEntry | []*ManagedFieldsEntry | | | ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like \"ci-cd\". The set of fields is always in the version that the workflow used when modifying the object. +optional | | | name | string| string | | | Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names +optional | | | namespace | string| string | | | Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. 
More info: http://kubernetes.io/docs/user-guide/namespaces +optional | | | ownerReferences | [] OwnerReference | []*OwnerReference | | | List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. +optional +patchMergeKey=uid +patchStrategy=merge | | | resourceVersion | string| string | | | An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency +optional | | | selfLink | string| string | | | Deprecated: selfLink is a legacy read-only field that is no longer populated by the system. +optional | | | spec | PersistentVolumeClaimSpec | PersistentVolumeClaimSpec | | | | | | uid | UID | UID | | | | |","title":" PersistentVolumeClaimTemplate"},{"location":"executor_swagger/#persistentvolumeclaimvolumesource","text":"This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). Properties Name Type Go type Required Default Description Example claimName string string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean bool readOnly Will force the ReadOnly setting in VolumeMounts. Default false. +optional","title":" PersistentVolumeClaimVolumeSource"},{"location":"executor_swagger/#persistentvolumemode","text":"+enum Name Type Go type Default Description Example PersistentVolumeMode string string +enum","title":" PersistentVolumeMode"},{"location":"executor_swagger/#photonpersistentdiskvolumesource","text":"Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. pdID string string pdID is the ID that identifies Photon Controller persistent disk","title":" PhotonPersistentDiskVolumeSource"},{"location":"executor_swagger/#plugin","text":"Plugin is an Object with exactly one key interface{}","title":" Plugin"},{"location":"executor_swagger/#podaffinity","text":"Properties Name Type Go type Required Default Description Example preferredDuringSchedulingIgnoredDuringExecution [] WeightedPodAffinityTerm []*WeightedPodAffinityTerm The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. +optional requiredDuringSchedulingIgnoredDuringExecution [] PodAffinityTerm []*PodAffinityTerm If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +optional","title":" PodAffinity"},{"location":"executor_swagger/#podaffinityterm","text":"Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running Properties Name Type Go type Required Default Description Example labelSelector LabelSelector LabelSelector namespaceSelector LabelSelector LabelSelector namespaces []string []string namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means \"this pod's namespace\". +optional topologyKey string string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.","title":" PodAffinityTerm"},{"location":"executor_swagger/#podantiaffinity","text":"Properties Name Type Go type Required Default Description Example preferredDuringSchedulingIgnoredDuringExecution [] WeightedPodAffinityTerm []*WeightedPodAffinityTerm The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. +optional requiredDuringSchedulingIgnoredDuringExecution [] PodAffinityTerm []*PodAffinityTerm If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +optional","title":" PodAntiAffinity"},{"location":"executor_swagger/#podfsgroupchangepolicy","text":"PodFSGroupChangePolicy holds policies that will be used for applying fsGroup to a volume when volume is mounted. +enum Name Type Go type Default Description Example PodFSGroupChangePolicy string string PodFSGroupChangePolicy holds policies that will be used for applying fsGroup to a volume when volume is mounted. +enum","title":" PodFSGroupChangePolicy"},{"location":"executor_swagger/#podsecuritycontext","text":"Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Properties Name Type Go type Required Default Description Example fsGroup int64 (formatted integer) int64 A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: The owning GID will be the FSGroup The setgid bit is set (new files created in the volume will be owned by FSGroup) The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. +optional | | | fsGroupChangePolicy | PodFSGroupChangePolicy | PodFSGroupChangePolicy | | | | | | runAsGroup | int64 (formatted integer)| int64 | | | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. +optional | | | runAsNonRoot | boolean| bool | | | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. +optional | | | runAsUser | int64 (formatted integer)| int64 | | | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. +optional | | | seLinuxOptions | SELinuxOptions | SELinuxOptions | | | | | | seccompProfile | SeccompProfile | SeccompProfile | | | | | | supplementalGroups | []int64 (formatted integer)| []int64 | | | A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows. +optional | | | sysctls | [] Sysctl | []*Sysctl | | | Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. 
+optional | | | windowsOptions | WindowsSecurityContextOptions | WindowsSecurityContextOptions | | | | |","title":" PodSecurityContext"},{"location":"executor_swagger/#portworxvolumesource","text":"Properties Name Type Go type Required Default Description Example fsType string string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\". Implicitly inferred to be \"ext4\" if unspecified. readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional volumeID string string volumeID uniquely identifies a Portworx volume","title":" PortworxVolumeSource"},{"location":"executor_swagger/#preferredschedulingterm","text":"An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Properties Name Type Go type Required Default Description Example preference NodeSelectorTerm NodeSelectorTerm weight int32 (formatted integer) int32 Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.","title":" PreferredSchedulingTerm"},{"location":"executor_swagger/#probe","text":"Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Properties Name Type Go type Required Default Description Example exec ExecAction ExecAction failureThreshold int32 (formatted integer) int32 Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. +optional grpc GRPCAction GRPCAction httpGet HTTPGetAction HTTPGetAction initialDelaySeconds int32 (formatted integer) int32 Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes +optional periodSeconds int32 (formatted integer) int32 How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. +optional successThreshold int32 (formatted integer) int32 Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. +optional tcpSocket TCPSocketAction TCPSocketAction terminationGracePeriodSeconds int64 (formatted integer) int64 Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. +optional timeoutSeconds int32 (formatted integer) int32 Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes +optional","title":" Probe"},{"location":"executor_swagger/#procmounttype","text":"+enum Name Type Go type Default Description Example ProcMountType string string +enum","title":" ProcMountType"},{"location":"executor_swagger/#progress","text":"Name Type Go type Default Description Example Progress string string","title":" Progress"},{"location":"executor_swagger/#projectedvolumesource","text":"Represents a projected volume source Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional sources [] VolumeProjection []*VolumeProjection sources is the list of volume projections +optional","title":" ProjectedVolumeSource"},{"location":"executor_swagger/#prometheus","text":"Prometheus is a prometheus metric to be emitted Properties Name Type Go type Required Default Description Example counter Counter Counter gauge Gauge Gauge help string string Help is a string that describes the metric histogram Histogram Histogram labels [] MetricLabel []*MetricLabel Labels is a list of metric labels name string string Name is the name of the metric when string string When is a conditional statement that decides when to emit the metric","title":" Prometheus"},{"location":"executor_swagger/#protocol","text":"+enum Name Type Go type Default Description Example Protocol string string +enum","title":" Protocol"},{"location":"executor_swagger/#pullpolicy","text":"PullPolicy describes a policy for if/when to pull a container image +enum Name Type Go type Default Description Example PullPolicy string string PullPolicy describes a policy for if/when to pull a container image +enum","title":" PullPolicy"},{"location":"executor_swagger/#quantity","text":"The serialization format is: ::= (Note that may be empty, from the \"\" case in .) ::= 0 | 1 | ... | 9 ::= | ::= | . | . | . ::= \"+\" | \"-\" ::= | ::= | | ::= Ki | Mi | Gi | Ti | Pi | Ei (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html) ::= m | \"\" | k | M | G | T | P | E (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.) ::= \"e\" | \"E\" No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized. Before serializing, Quantity will be put in \"canonical form\". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that: a. No precision is lost b. No fractional digits will be emitted c. The exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative. 
Examples: 1.5 will be serialized as \"1500m\" 1.5Gi will be serialized as \"1536Mi\" Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise. Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.) This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation. +protobuf=true +protobuf.embed=string +protobuf.options.marshal=false +protobuf.options.(gogoproto.goproto_stringer)=false +k8s:deepcopy-gen=true +k8s:openapi-gen=true interface{}","title":" Quantity"},{"location":"executor_swagger/#quobytevolumesource","text":"Quobyte volumes do not support ownership management or SELinux relabeling. Properties Name Type Go type Required Default Description Example group string string group to map volume access to Default is no group +optional readOnly boolean bool readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. +optional registry string string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin +optional user string string user to map volume access to Defaults to serviceaccount user +optional volume string string volume is a string that references an already created Quobyte volume by name.","title":" QuobyteVolumeSource"},{"location":"executor_swagger/#rbdvolumesource","text":"RBD volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine +optional image string string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional monitors []string []string monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional readOnly boolean bool readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional secretRef LocalObjectReference LocalObjectReference user string string user is the rados user name. Default is admin. 
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +optional","title":" RBDVolumeSource"},{"location":"executor_swagger/#rawartifact","text":"RawArtifact allows raw string content to be placed as an artifact in a container Properties Name Type Go type Required Default Description Example data string string Data is the string contents of the artifact","title":" RawArtifact"},{"location":"executor_swagger/#resourcefieldselector","text":"ResourceFieldSelector represents container resources (cpu, memory) and their output format +structType=atomic Properties Name Type Go type Required Default Description Example containerName string string Container name: required for volumes, optional for env vars +optional divisor Quantity Quantity resource string string Required: resource to select","title":" ResourceFieldSelector"},{"location":"executor_swagger/#resourcelist","text":"ResourceList","title":" ResourceList"},{"location":"executor_swagger/#resourcerequirements","text":"Properties Name Type Go type Required Default Description Example limits ResourceList ResourceList requests ResourceList ResourceList","title":" ResourceRequirements"},{"location":"executor_swagger/#resourcetemplate","text":"ResourceTemplate is a template subtype to manipulate kubernetes resources Properties Name Type Go type Required Default Description Example action string string Action is the action to perform to the resource. Must be one of: get, create, apply, delete, replace, patch failureCondition string string FailureCondition is a label selector expression which describes the conditions of the k8s resource in which the step was considered failed flags []string []string Flags is a set of additional options passed to kubectl before submitting a resource I.e. to disable resource validation: flags: [ \"--validate=false\" # disable resource validation ] manifest string string Manifest contains the kubernetes manifest manifestFrom ManifestFrom ManifestFrom mergeStrategy string string MergeStrategy is the strategy used to merge a patch. It defaults to \"strategic\" Must be one of: strategic, merge, json setOwnerReference boolean bool SetOwnerReference sets the reference to the workflow on the OwnerReference of generated resource. successCondition string string SuccessCondition is a label selector expression which describes the conditions of the k8s resource in which it is acceptable to proceed to the following step","title":" ResourceTemplate"},{"location":"executor_swagger/#retryaffinity","text":"Properties Name Type Go type Required Default Description Example nodeAntiAffinity RetryNodeAntiAffinity RetryNodeAntiAffinity","title":" RetryAffinity"},{"location":"executor_swagger/#retrynodeantiaffinity","text":"In order to prevent running steps on the same host, it uses \"kubernetes.io/hostname\". interface{}","title":" RetryNodeAntiAffinity"},{"location":"executor_swagger/#retrypolicy","text":"Name Type Go type Default Description Example RetryPolicy string string","title":" RetryPolicy"},{"location":"executor_swagger/#retrystrategy","text":"RetryStrategy provides controls on how to retry a workflow step Properties Name Type Go type Required Default Description Example affinity RetryAffinity RetryAffinity backoff Backoff Backoff expression string string Expression is a condition expression for when a node will be retried. 
If it evaluates to false, the node will not be retried and the retry strategy will be ignored limit IntOrString IntOrString retryPolicy RetryPolicy RetryPolicy","title":" RetryStrategy"},{"location":"executor_swagger/#s3artifact","text":"S3Artifact is the location of an S3 artifact Properties Name Type Go type Required Default Description Example accessKeySecret SecretKeySelector SecretKeySelector bucket string string Bucket is the name of the bucket caSecret SecretKeySelector SecretKeySelector createBucketIfNotPresent CreateS3BucketOptions CreateS3BucketOptions encryptionOptions S3EncryptionOptions S3EncryptionOptions endpoint string string Endpoint is the hostname of the bucket endpoint insecure boolean bool Insecure will connect to the service with TLS key string string Key is the key in the bucket where the artifact resides region string string Region contains the optional bucket region roleARN string string RoleARN is the Amazon Resource Name (ARN) of the role to assume. secretKeySecret SecretKeySelector SecretKeySelector useSDKCreds boolean bool UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":" S3Artifact"},{"location":"executor_swagger/#s3encryptionoptions","text":"S3EncryptionOptions used to determine encryption options during s3 operations Properties Name Type Go type Required Default Description Example enableEncryption boolean bool EnableEncryption tells the driver to encrypt objects if set to true. If kmsKeyId and serverSideCustomerKeySecret are not set, SSE-S3 will be used kmsEncryptionContext string string KmsEncryptionContext is a json blob that contains an encryption context. See https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context for more information kmsKeyId string string KMSKeyId tells the driver to encrypt the object using the specified KMS Key. serverSideCustomerKeySecret SecretKeySelector SecretKeySelector","title":" S3EncryptionOptions"},{"location":"executor_swagger/#selinuxoptions","text":"SELinuxOptions are the labels to be applied to the container Properties Name Type Go type Required Default Description Example level string string Level is SELinux level label that applies to the container. +optional role string string Role is a SELinux role label that applies to the container. +optional type string string Type is a SELinux type label that applies to the container. +optional user string string User is a SELinux user label that applies to the container. +optional","title":" SELinuxOptions"},{"location":"executor_swagger/#scaleiovolumesource","text":"ScaleIOVolumeSource represents a persistent ScaleIO volume Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Default is \"xfs\". +optional gateway string string gateway is the host address of the ScaleIO API Gateway. protectionDomain string string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. +optional readOnly boolean bool readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional secretRef LocalObjectReference LocalObjectReference sslEnabled boolean bool sslEnabled Flag enable/disable SSL communication with Gateway, default false +optional storageMode string string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. 
Default is ThinProvisioned. +optional storagePool string string storagePool is the ScaleIO Storage Pool associated with the protection domain. +optional system string string system is the name of the storage system as configured in ScaleIO. volumeName string string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source.","title":" ScaleIOVolumeSource"},{"location":"executor_swagger/#scripttemplate","text":"ScriptTemplate is a template subtype to enable scripting through code steps Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. 
Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext source string string Source contains the source code of the script to execute startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional","title":" ScriptTemplate"},{"location":"executor_swagger/#seccompprofile","text":"Only one profile source may be set. +union Properties Name Type Go type Required Default Description Example localhostProfile string string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is \"Localhost\". +optional type SeccompProfileType SeccompProfileType","title":" SeccompProfile"},{"location":"executor_swagger/#seccompprofiletype","text":"+enum Name Type Go type Default Description Example SeccompProfileType string string +enum","title":" SeccompProfileType"},{"location":"executor_swagger/#secretenvsource","text":"The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Properties Name Type Go type Required Default Description Example name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. 
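The ScriptTemplate above is easiest to read alongside a concrete template. The sketch below mirrors the standard scripts-python example; the image and script body are illustrative only.

```yaml
# Illustrative sketch only: the image and script body are placeholders.
- name: gen-random-int
  script:
    image: python:alpine3.6
    command: [python]
    source: |
      import random
      print(random.randint(1, 100))
```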
apiVersion, kind, uid? +optional optional boolean bool Specify whether the Secret must be defined +optional","title":" SecretEnvSource"},{"location":"executor_swagger/#secretkeyselector","text":"+structType=atomic Properties Name Type Go type Required Default Description Example key string string The key of the secret to select from. Must be a valid secret key. name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool Specify whether the Secret or its key must be defined +optional","title":" SecretKeySelector"},{"location":"executor_swagger/#secretprojection","text":"The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. Properties Name Type Go type Required Default Description Example items [] KeyToPath []*KeyToPath items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional name string string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional optional boolean bool optional field specify whether the Secret or its key must be defined +optional","title":" SecretProjection"},{"location":"executor_swagger/#secretvolumesource","text":"The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Properties Name Type Go type Required Default Description Example defaultMode int32 (formatted integer) int32 defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional items [] KeyToPath []*KeyToPath items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. +optional optional boolean bool optional field specify whether the Secret or its keys must be defined +optional secretName string string secretName is the name of the secret in the pod's namespace to use. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret +optional","title":" SecretVolumeSource"},{"location":"executor_swagger/#securitycontext","text":"Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Properties Name Type Go type Required Default Description Example allowPrivilegeEscalation boolean bool AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. +optional capabilities Capabilities Capabilities privileged boolean bool Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. +optional procMount ProcMountType ProcMountType readOnlyRootFilesystem boolean bool Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. +optional runAsGroup int64 (formatted integer) int64 The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. +optional runAsNonRoot boolean bool Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. +optional runAsUser int64 (formatted integer) int64 The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. +optional seLinuxOptions SELinuxOptions SELinuxOptions seccompProfile SeccompProfile SeccompProfile windowsOptions WindowsSecurityContextOptions WindowsSecurityContextOptions","title":" SecurityContext"},{"location":"executor_swagger/#semaphoreref","text":"SemaphoreRef is a reference of Semaphore Properties Name Type Go type Required Default Description Example configMapKeyRef ConfigMapKeySelector ConfigMapKeySelector namespace string string \"[namespace of workflow]\"","title":" SemaphoreRef"},{"location":"executor_swagger/#sequence","text":"Sequence expands a workflow step into numeric range Properties Name Type Go type Required Default Description Example count IntOrString IntOrString end IntOrString IntOrString format string string Format is a printf format string to format the value in the sequence start IntOrString IntOrString","title":" Sequence"},{"location":"executor_swagger/#serviceaccounttokenprojection","text":"ServiceAccountTokenProjection represents a projected service account token volume. 
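A minimal sketch of how SecretKeySelector and SecretVolumeSource are typically used, assuming a Secret named my-secret already exists; the key, path and mode values are placeholders, and the two fragments below belong in different places (a volume on the spec or template, an env var on a container).

```yaml
# Illustrative sketch only: assumes a Secret named my-secret already exists.
# Fragment 1: a volume backed by a Secret (SecretVolumeSource).
volumes:
  - name: my-secret-vol
    secret:
      secretName: my-secret
      defaultMode: 0400
      items:
        - key: password
          path: credentials/password
# Fragment 2: an env var sourced from one Secret key (SecretKeySelector).
env:
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: password
```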
This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). Properties Name Type Go type Required Default Description Example audience string string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. +optional expirationSeconds int64 (formatted integer) int64 expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. +optional path string string path is the path relative to the mount point of the file to project the token into.","title":" ServiceAccountTokenProjection"},{"location":"executor_swagger/#storagemedium","text":"Name Type Go type Default Description Example StorageMedium string string","title":" StorageMedium"},{"location":"executor_swagger/#storageosvolumesource","text":"Properties Name Type Go type Required Default Description Example fsType string string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. +optional readOnly boolean bool readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional secretRef LocalObjectReference LocalObjectReference volumeName string string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to \"default\" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. +optional","title":" StorageOSVolumeSource"},{"location":"executor_swagger/#suppliedvaluefrom","text":"interface{}","title":" SuppliedValueFrom"},{"location":"executor_swagger/#suspendtemplate","text":"SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time Properties Name Type Go type Required Default Description Example duration string string Duration is the seconds to wait before automatically resuming a template. Must be a string. Default unit is seconds. 
Could also be a Duration, e.g.: \"2m\", \"6h\"","title":" SuspendTemplate"},{"location":"executor_swagger/#synchronization","text":"Synchronization holds synchronization lock configuration Properties Name Type Go type Required Default Description Example mutex Mutex Mutex semaphore SemaphoreRef SemaphoreRef","title":" Synchronization"},{"location":"executor_swagger/#sysctl","text":"Sysctl defines a kernel parameter to be set Properties Name Type Go type Required Default Description Example name string string Name of a property to set value string string Value of a property to set","title":" Sysctl"},{"location":"executor_swagger/#tcpsocketaction","text":"TCPSocketAction describes an action based on opening a socket Properties Name Type Go type Required Default Description Example host string string Optional: Host name to connect to, defaults to the pod IP. +optional port IntOrString IntOrString","title":" TCPSocketAction"},{"location":"executor_swagger/#tainteffect","text":"+enum Name Type Go type Default Description Example TaintEffect string string +enum","title":" TaintEffect"},{"location":"executor_swagger/#tarstrategy","text":"TarStrategy will tar and gzip the file or directory when saving Properties Name Type Go type Required Default Description Example compressionLevel int32 (formatted integer) int32 CompressionLevel specifies the gzip compression level to use for the artifact. Defaults to gzip.DefaultCompression.","title":" TarStrategy"},{"location":"executor_swagger/#template","text":"Template is a reusable and composable unit of execution in a workflow Properties Name Type Go type Required Default Description Example activeDeadlineSeconds IntOrString IntOrString affinity Affinity Affinity archiveLocation ArtifactLocation ArtifactLocation automountServiceAccountToken boolean bool AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods. ServiceAccountName of ExecutorConfig must be specified if this value is false. container Container Container containerSet ContainerSetTemplate ContainerSetTemplate daemon boolean bool Daemon will allow a workflow to proceed to the next step so long as the container reaches readiness dag DAGTemplate DAGTemplate data Data Data executor ExecutorConfig ExecutorConfig failFast boolean bool FailFast, if specified, will fail this template if any of its child pods has failed. This is useful for when this template is expanded with withItems , etc. hostAliases [] HostAlias []*HostAlias HostAliases is an optional list of hosts and IPs that will be injected into the pod spec +patchStrategy=merge +patchMergeKey=ip http HTTP HTTP initContainers [] UserContainer []*UserContainer InitContainers is a list of containers which run before the main container. +patchStrategy=merge +patchMergeKey=name inputs Inputs Inputs memoize Memoize Memoize metadata Metadata Metadata metrics Metrics Metrics name string string Name is the name of the template nodeSelector map of string map[string]string NodeSelector is a selector to schedule this step of the workflow to be run on the selected node(s). Overrides the selector set at the workflow level. outputs Outputs Outputs parallelism int64 (formatted integer) int64 Parallelism limits the max total parallel pods that can execute at the same time within the boundaries of this template invocation. If additional steps/dag templates are invoked, the pods created by those templates will not be counted towards this total. 
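As a small illustration of SuspendTemplate, the two templates below suspend a workflow either indefinitely or for a fixed duration; the template names are placeholders and the "2m" value follows the duration format noted above.

```yaml
# Illustrative sketch only: template names are placeholders.
- name: wait-for-approval
  suspend: {}            # suspends until manually resumed
- name: short-delay
  suspend:
    duration: "2m"       # automatically resumes after two minutes
```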
plugin Plugin Plugin podSpecPatch string string PodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of container fields which are not strings (e.g. resource limits). priority int32 (formatted integer) int32 Priority to apply to workflow pods. priorityClassName string string PriorityClassName to apply to workflow pods. resource ResourceTemplate ResourceTemplate retryStrategy RetryStrategy RetryStrategy schedulerName string string If specified, the pod will be dispatched by specified scheduler. Or it will be dispatched by workflow scope scheduler if specified. If neither specified, the pod will be dispatched by default scheduler. +optional script ScriptTemplate ScriptTemplate securityContext PodSecurityContext PodSecurityContext serviceAccountName string string ServiceAccountName to apply to workflow pods sidecars [] UserContainer []*UserContainer Sidecars is a list of containers which run alongside the main container Sidecars are automatically killed when the main container completes +patchStrategy=merge +patchMergeKey=name steps [] ParallelSteps []ParallelSteps Steps define a series of sequential/parallel workflow steps suspend SuspendTemplate SuspendTemplate synchronization Synchronization Synchronization timeout string string Timeout allows to set the total node execution timeout duration counting from the node's start time. This duration also includes time in which the node spends in Pending state. This duration may not be applied to Step or DAG templates. tolerations [] Toleration []*Toleration Tolerations to apply to workflow pods. +patchStrategy=merge +patchMergeKey=key volumes [] Volume []*Volume Volumes is a list of volumes that can be mounted by containers in a template. +patchStrategy=merge +patchMergeKey=name","title":" Template"},{"location":"executor_swagger/#templateref","text":"Properties Name Type Go type Required Default Description Example clusterScope boolean bool ClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate). name string string Name is the resource name of the template. template string string Template is the name of referred template in the resource.","title":" TemplateRef"},{"location":"executor_swagger/#terminationmessagepolicy","text":"+enum Name Type Go type Default Description Example TerminationMessagePolicy string string +enum","title":" TerminationMessagePolicy"},{"location":"executor_swagger/#time","text":"+protobuf.options.marshal=false +protobuf.as=Timestamp +protobuf.options.(gogoproto.goproto_stringer)=false interface{}","title":" Time"},{"location":"executor_swagger/#toleration","text":"The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator . Properties Name Type Go type Required Default Description Example effect TaintEffect TaintEffect key string string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. +optional operator TolerationOperator TolerationOperator tolerationSeconds int64 (formatted integer) int64 TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. 
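The TemplateRef fields above are normally set from a workflow step or DAG task. A minimal sketch, assuming a WorkflowTemplate named my-workflow-template containing a template called hello exists:

```yaml
# Illustrative sketch only: resource and template names are placeholders.
- name: call-shared-template
  templateRef:
    name: my-workflow-template   # resource name of the referred template
    template: hello              # template name inside that resource
    clusterScope: false          # true would refer to a ClusterWorkflowTemplate
```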
+optional value string string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. +optional","title":" Toleration"},{"location":"executor_swagger/#tolerationoperator","text":"+enum Name Type Go type Default Description Example TolerationOperator string string +enum","title":" TolerationOperator"},{"location":"executor_swagger/#transformation","text":"[] TransformationStep","title":" Transformation"},{"location":"executor_swagger/#transformationstep","text":"Properties Name Type Go type Required Default Description Example expression string string Expression defines an expr expression to apply","title":" TransformationStep"},{"location":"executor_swagger/#type","text":"Name Type Go type Default Description Example Type int64 (formatted integer) int64","title":" Type"},{"location":"executor_swagger/#typedlocalobjectreference","text":"TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. +structType=atomic Properties Name Type Go type Required Default Description Example apiGroup string string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. +optional kind string string Kind is the type of resource being referenced name string string Name is the name of resource being referenced","title":" TypedLocalObjectReference"},{"location":"executor_swagger/#uid","text":"UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. Name Type Go type Default Description Example UID string string UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated.","title":" UID"},{"location":"executor_swagger/#urischeme","text":"URIScheme identifies the scheme used for connection to a host for Get actions +enum Name Type Go type Default Description Example URIScheme string string URIScheme identifies the scheme used for connection to a host for Get actions +enum","title":" URIScheme"},{"location":"executor_swagger/#usercontainer","text":"Properties Name Type Go type Required Default Description Example args []string []string Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional command []string []string Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. 
Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional env [] EnvVar []*EnvVar List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge envFrom [] EnvFromSource []*EnvFromSource List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional image string string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional imagePullPolicy PullPolicy PullPolicy lifecycle Lifecycle Lifecycle livenessProbe Probe Probe mirrorVolumeMounts boolean bool MirrorVolumeMounts will mount the same volumes specified in the main container to the container (including artifacts), at the same mountPaths. This enables dind daemon to partially see the same filesystem as the main container in order to use features such as docker volume binding name string string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports [] ContainerPort []*ContainerPort List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol readinessProbe Probe Probe resources ResourceRequirements ResourceRequirements securityContext SecurityContext SecurityContext startupProbe Probe Probe stdin boolean bool Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional stdinOnce boolean bool Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false +optional terminationMessagePath string string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. 
Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional terminationMessagePolicy TerminationMessagePolicy TerminationMessagePolicy tty boolean bool Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional volumeDevices [] VolumeDevice []*VolumeDevice volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional volumeMounts [] VolumeMount []*VolumeMount Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge workingDir string string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional","title":" UserContainer"},{"location":"executor_swagger/#valuefrom","text":"ValueFrom describes a location in which to obtain the value to a parameter Properties Name Type Go type Required Default Description Example configMapKeyRef ConfigMapKeySelector ConfigMapKeySelector default AnyString AnyString event string string Selector (https://github.com/antonmedv/expr) that is evaluated against the event to get the value of the parameter. E.g. payload.message expression string string Expression, if defined, is evaluated to specify the value for the parameter jqFilter string string JQFilter expression against the resource object in resource templates jsonPath string string JSONPath of a resource to retrieve an output parameter value from in resource templates parameter string string Parameter reference to a step or dag task in which to retrieve an output parameter value from (e.g. '{{steps.mystep.outputs.myparam}}') path string string Path in the container to retrieve an output parameter value from in container templates supplied SuppliedValueFrom SuppliedValueFrom","title":" ValueFrom"},{"location":"executor_swagger/#volume","text":"Properties Name Type Go type Required Default Description Example awsElasticBlockStore AWSElasticBlockStoreVolumeSource AWSElasticBlockStoreVolumeSource azureDisk AzureDiskVolumeSource AzureDiskVolumeSource azureFile AzureFileVolumeSource AzureFileVolumeSource cephfs CephFSVolumeSource CephFSVolumeSource cinder CinderVolumeSource CinderVolumeSource configMap ConfigMapVolumeSource ConfigMapVolumeSource csi CSIVolumeSource CSIVolumeSource downwardAPI DownwardAPIVolumeSource DownwardAPIVolumeSource emptyDir EmptyDirVolumeSource EmptyDirVolumeSource ephemeral EphemeralVolumeSource EphemeralVolumeSource fc FCVolumeSource FCVolumeSource flexVolume FlexVolumeSource FlexVolumeSource flocker FlockerVolumeSource FlockerVolumeSource gcePersistentDisk GCEPersistentDiskVolumeSource GCEPersistentDiskVolumeSource gitRepo GitRepoVolumeSource GitRepoVolumeSource glusterfs GlusterfsVolumeSource GlusterfsVolumeSource hostPath HostPathVolumeSource HostPathVolumeSource iscsi ISCSIVolumeSource ISCSIVolumeSource name string string name of the volume. Must be a DNS_LABEL and unique within the pod. 
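ValueFrom is most often seen on output parameters. The sketch below shows the two common forms, path for container and script templates and parameter for steps and DAG templates; the parameter names and paths are placeholders.

```yaml
# Illustrative sketch only: parameter names and paths are placeholders.
outputs:
  parameters:
    # container/script templates: read the value from a file written by the step
    - name: result
      valueFrom:
        path: /tmp/result.txt
    # steps/dag templates: surface a child task's output parameter
    - name: aggregated
      valueFrom:
        parameter: "{{steps.generate.outputs.parameters.result}}"
```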
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs NFSVolumeSource NFSVolumeSource persistentVolumeClaim PersistentVolumeClaimVolumeSource PersistentVolumeClaimVolumeSource photonPersistentDisk PhotonPersistentDiskVolumeSource PhotonPersistentDiskVolumeSource portworxVolume PortworxVolumeSource PortworxVolumeSource projected ProjectedVolumeSource ProjectedVolumeSource quobyte QuobyteVolumeSource QuobyteVolumeSource rbd RBDVolumeSource RBDVolumeSource scaleIO ScaleIOVolumeSource ScaleIOVolumeSource secret SecretVolumeSource SecretVolumeSource storageos StorageOSVolumeSource StorageOSVolumeSource vsphereVolume VsphereVirtualDiskVolumeSource VsphereVirtualDiskVolumeSource","title":" Volume"},{"location":"executor_swagger/#volumedevice","text":"Properties Name Type Go type Required Default Description Example devicePath string string devicePath is the path inside of the container that the device will be mapped to. name string string name must match the name of a persistentVolumeClaim in the pod","title":" VolumeDevice"},{"location":"executor_swagger/#volumemount","text":"Properties Name Type Go type Required Default Description Example mountPath string string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation MountPropagationMode MountPropagationMode name string string This must match the Name of a Volume. readOnly boolean bool Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. +optional subPath string string Path within the volume from which the container's volume should be mounted. Defaults to \"\" (volume's root). +optional subPathExpr string string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to \"\" (volume's root). SubPathExpr and SubPath are mutually exclusive. +optional","title":" VolumeMount"},{"location":"executor_swagger/#volumeprojection","text":"Projection that may be projected along with other supported volume types Properties Name Type Go type Required Default Description Example configMap ConfigMapProjection ConfigMapProjection downwardAPI DownwardAPIProjection DownwardAPIProjection secret SecretProjection SecretProjection serviceAccountToken ServiceAccountTokenProjection ServiceAccountTokenProjection","title":" VolumeProjection"},{"location":"executor_swagger/#vspherevirtualdiskvolumesource","text":"Properties Name Type Go type Required Default Description Example fsType string string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. +optional storagePolicyID string string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. +optional storagePolicyName string string storagePolicyName is the storage Policy Based Management (SPBM) profile name. 
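To show how Volume, VolumeMount and the name-matching rule above relate, here is a minimal WorkflowSpec fragment; the volume name, image and paths are placeholders.

```yaml
# Illustrative sketch only: volume name, image and paths are placeholders.
volumes:
  - name: workdir
    emptyDir: {}
templates:
  - name: build
    container:
      image: alpine:3.18
      command: [sh, -c, "echo hello > /work/out.txt"]
      volumeMounts:
        - name: workdir      # must match the name of a Volume declared above
          mountPath: /work
```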
+optional volumePath string string volumePath is the path that identifies vSphere volume vmdk","title":" VsphereVirtualDiskVolumeSource"},{"location":"executor_swagger/#weightedpodaffinityterm","text":"The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Properties Name Type Go type Required Default Description Example podAffinityTerm PodAffinityTerm PodAffinityTerm weight int32 (formatted integer) int32 weight associated with matching the corresponding podAffinityTerm, in the range 1-100.","title":" WeightedPodAffinityTerm"},{"location":"executor_swagger/#windowssecuritycontextoptions","text":"Properties Name Type Go type Required Default Description Example gmsaCredentialSpec string string GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. +optional gmsaCredentialSpecName string string GMSACredentialSpecName is the name of the GMSA credential spec to use. +optional hostProcess boolean bool HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. +optional runAsUserName string string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. +optional","title":" WindowsSecurityContextOptions"},{"location":"executor_swagger/#workflow","text":"Properties Name Type Go type Required Default Description Example metadata ObjectMeta ObjectMeta \u2713","title":" Workflow"},{"location":"executor_swagger/#zipstrategy","text":"ZipStrategy will unzip zipped input artifacts interface{}","title":" ZipStrategy"},{"location":"faq/","text":"FAQ \u00b6 \"token not valid\", \"any bearer token is able to login in the UI or use the API\" \u00b6 You may not have configured Argo Server authentication correctly. If you want SSO, try running with --auth-mode=sso . If you're using --auth-mode=client , make sure you have Bearer in front of the ServiceAccount Secret, as mentioned in Access Token . Learn more about the Argo Server set-up Argo Server return EOF error \u00b6 Since v3.0 the Argo Server listens for HTTPS requests, rather than HTTP. Try changing your URL to HTTPS, or start Argo Server using --secure=false . My workflow hangs \u00b6 Check your wait container logs: Is there an RBAC error? Learn more about workflow RBAC Return \"unknown (get pods)\" error \u00b6 You're probably getting a permission denied error because your RBAC is not configured. Learn more about workflow RBAC and even more details There is an error about /var/run/docker.sock \u00b6 Try using a different container runtime executor. 
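The FAQ entries above mention the --auth-mode and --secure=false flags. As a rough sketch only, they could be passed to the argo-server container along these lines; the Deployment fields and image tag shown are assumptions, not taken from the install manifests.

```yaml
# Rough sketch only: Deployment fields and image tag are assumptions.
containers:
  - name: argo-server
    image: quay.io/argoproj/argocli:latest
    args:
      - server
      - --auth-mode=sso    # or --auth-mode=client
      - --secure=false     # serve plain HTTP instead of HTTPS
```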
Learn more about executors","title":"FAQ"},{"location":"faq/#faq","text":"","title":"FAQ"},{"location":"faq/#token-not-valid-any-bearer-token-is-able-to-login-in-the-ui-or-use-the-api","text":"You may not have configured Argo Server authentication correctly. If you want SSO, try running with --auth-mode=sso . If you're using --auth-mode=client , make sure you have Bearer in front of the ServiceAccount Secret, as mentioned in Access Token . Learn more about the Argo Server set-up","title":"\"token not valid\", \"any bearer token is able to login in the UI or use the API\""},{"location":"faq/#argo-server-return-eof-error","text":"Since v3.0 the Argo Server listens for HTTPS requests, rather than HTTP. Try changing your URL to HTTPS, or start Argo Server using --secure=false .","title":"Argo Server return EOF error"},{"location":"faq/#my-workflow-hangs","text":"Check your wait container logs: Is there an RBAC error? Learn more about workflow RBAC","title":"My workflow hangs"},{"location":"faq/#return-unknown-get-pods-error","text":"You're probably getting a permission denied error because your RBAC is not configured. Learn more about workflow RBAC and even more details","title":"Return \"unknown (get pods)\" error"},{"location":"faq/#there-is-an-error-about-varrundockersock","text":"Try using a different container runtime executor. Learn more about executors","title":"There is an error about /var/run/docker.sock"},{"location":"fields/","text":"Field Reference \u00b6 Workflow \u00b6 Workflow is the definition of a workflow resource Examples (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - 
[`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`daemoned-stateful-set-with-service.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemoned-stateful-set-with-service.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - 
[`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - 
[`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-jobs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-jobs.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-orchestration.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-orchestration.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch-basic.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch-basic.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-resource-log-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-resource-log-selector.yaml) - [`k8s-set-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-set-owner-reference.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - 
[`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - 
[`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resource-delete-with-flags.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resource-delete-with-flags.yaml) - [`resource-flags.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resource-flags.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - 
[`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml) Fields \u00b6 Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta No description available spec WorkflowSpec No description available status WorkflowStatus No description available CronWorkflow \u00b6 CronWorkflow is the definition of a scheduled workflow resource. Examples (click to open) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) Fields \u00b6 Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. 
In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta No description available spec CronWorkflowSpec No description available status CronWorkflowStatus No description available WorkflowTemplate \u00b6 WorkflowTemplate is the definition of a workflow template resource. Examples (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta No description available spec WorkflowSpec No description available WorkflowSpec \u00b6 WorkflowSpec is the specification of a Workflow. 
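To make the top-level layout concrete before the WorkflowSpec fields are listed: the apiVersion, kind, metadata and spec fields documented above come together in a manifest along the lines of the following minimal sketch (the generateName, template name and image are illustrative placeholders, and status is filled in by the controller rather than by the author):

```yaml
# Minimal sketch of a Workflow resource; names and image are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-   # standard Kubernetes ObjectMeta
spec:                          # a WorkflowSpec (fields documented below)
  entrypoint: main
  templates:
    - name: main
      container:
        image: busybox
        command: [echo, "hello"]
# status is a WorkflowStatus populated by the controller at runtime
```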
Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - 
[`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - 
[`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - 
[`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - 
[`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - 
[`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml) Fields \u00b6 Field Name Field Type Description activeDeadlineSeconds integer Optional duration in seconds relative to the workflow start time which the workflow is allowed to run before the controller terminates the io.argoproj.workflow.v1alpha1. A value of zero is used to terminate a Running workflow affinity Affinity Affinity sets the scheduling constraints for all pods in the io.argoproj.workflow.v1alpha1. Can be overridden by an affinity specified in the template archiveLogs boolean ArchiveLogs indicates if the container logs should be archived arguments Arguments Arguments contain the parameters and artifacts sent to the workflow entrypoint Parameters are referencable globally using the 'workflow' variable prefix. e.g. {{io.argoproj.workflow.v1alpha1.parameters.myparam}} artifactGC WorkflowLevelArtifactGC ArtifactGC describes the strategy to use when deleting artifacts from completed or deleted workflows (applies to all output Artifacts unless Artifact.ArtifactGC is specified, which overrides this) artifactRepositoryRef ArtifactRepositoryRef ArtifactRepositoryRef specifies the configMap name and key containing the artifact repository config. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods. ServiceAccountName of ExecutorConfig must be specified if this value is false. dnsConfig PodDNSConfig PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to \"ClusterFirst\". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. entrypoint string Entrypoint is a template reference to the starting point of the io.argoproj.workflow.v1alpha1. executor ExecutorConfig Executor holds configurations of executor containers of the io.argoproj.workflow.v1alpha1. hooks LifecycleHook Hooks holds the lifecycle hook which is invoked at lifecycle of step, irrespective of the success, failure, or error status of the primary step hostAliases Array< HostAlias > No description available hostNetwork boolean Host networking requested for this workflow pod. Default to false. imagePullSecrets Array< LocalObjectReference > ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. 
More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod metrics Metrics Metrics are a list of metrics emitted from this Workflow nodeSelector Map< string , string > NodeSelector is a selector which will result in all pods of the workflow to be scheduled on the selected node(s). This is able to be overridden by a nodeSelector specified in the template. onExit string OnExit is a template reference which is invoked at the end of the workflow, irrespective of the success, failure, or error of the primary io.argoproj.workflow.v1alpha1. parallelism integer Parallelism limits the max total parallel pods that can execute at the same time in a workflow podDisruptionBudget PodDisruptionBudgetSpec PodDisruptionBudget holds the number of concurrent disruptions that you allow for Workflow's Pods. Controller will automatically add the selector with workflow name, if selector is empty. Optional: Defaults to empty. podGC PodGC PodGC describes the strategy to use when deleting completed pods podMetadata Metadata PodMetadata defines additional metadata that should be applied to workflow pods ~~ podPriority ~~ ~~ integer ~~ ~~Priority to apply to workflow pods.~~ DEPRECATED: Use PodPriorityClassName instead. podPriorityClassName string PriorityClassName to apply to workflow pods. podSpecPatch string PodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of container fields which are not strings (e.g. resource limits). priority integer Priority is used if controller is configured to process limited number of workflows in parallel. Workflows with higher priority are processed first. retryStrategy RetryStrategy RetryStrategy for all templates in the io.argoproj.workflow.v1alpha1. schedulerName string Set scheduler name for all pods. Will be overridden if container/script template's scheduler name is set. Default scheduler will be used if neither specified. securityContext PodSecurityContext SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to run all pods of the workflow as. shutdown string Shutdown will shutdown the workflow according to its ShutdownStrategy suspend boolean Suspend will suspend the workflow and prevent execution of any future steps in the workflow synchronization Synchronization Synchronization holds synchronization lock configuration for this Workflow templateDefaults Template TemplateDefaults holds default template values that will apply to all templates in the Workflow, unless overridden on the template-level templates Array< Template > Templates is a list of workflow templates used in a workflow tolerations Array< Toleration > Tolerations to apply to workflow pods. ttlStrategy TTLStrategy TTLStrategy limits the lifetime of a Workflow that has finished execution depending on if it Succeeded or Failed. If this struct is set, once the Workflow finishes, it will be deleted after the time to live expires. If this field is unset, the controller config map will hold the default values. volumeClaimGC VolumeClaimGC VolumeClaimGC describes the strategy to use when deleting volumes from completed workflows volumeClaimTemplates Array< PersistentVolumeClaim > VolumeClaimTemplates is a list of claims that containers are allowed to reference. 
The Workflow controller will create the claims at the beginning of the workflow and delete the claims upon completion of the workflow volumes Array< Volume > Volumes is a list of volumes that can be mounted by containers in a io.argoproj.workflow.v1alpha1. workflowMetadata WorkflowMetadata WorkflowMetadata contains some metadata of the workflow to refer to workflowTemplateRef WorkflowTemplateRef WorkflowTemplateRef holds a reference to a WorkflowTemplate for execution WorkflowStatus \u00b6 WorkflowStatus contains overall status information about a workflow Fields \u00b6 Field Name Field Type Description artifactGCStatus ArtGCStatus ArtifactGCStatus maintains the status of Artifact Garbage Collection artifactRepositoryRef ArtifactRepositoryRefStatus ArtifactRepositoryRef is used to cache the repository to use so we do not need to determine it every time we reconcile. compressedNodes string Compressed and base64 decoded Nodes map conditions Array< Condition > Conditions is a list of conditions the Workflow may have estimatedDuration integer EstimatedDuration in seconds. finishedAt Time Time at which this workflow completed message string A human-readable message indicating details about why the workflow is in this condition. nodes NodeStatus Nodes is a mapping between a node ID and the node's status. offloadNodeStatusVersion string Whether or not node status has been offloaded to a database. If exists, then Nodes and CompressedNodes will be empty. This will actually be populated with a hash of the offloaded data. outputs Outputs Outputs captures output values and artifact locations produced by the workflow via global outputs persistentVolumeClaims Array< Volume > PersistentVolumeClaims tracks all PVCs that were created as part of the io.argoproj.workflow.v1alpha1. The contents of this list are drained at the end of the workflow. phase string Phase is a simple, high-level summary of where the workflow is in its lifecycle. Will be \"\" (Unknown), \"Pending\", or \"Running\" before the workflow is completed, and \"Succeeded\", \"Failed\" or \"Error\" once the workflow has completed. progress string Progress to completion resourcesDuration Map< integer , int64 > ResourcesDuration is the total for the workflow startedAt Time Time at which this workflow started storedTemplates Template StoredTemplates is a mapping between a template ref and the node's status. storedWorkflowTemplateSpec WorkflowSpec StoredWorkflowSpec stores the WorkflowTemplate spec for future execution. synchronization SynchronizationStatus Synchronization stores the status of synchronization locks taskResultsCompleted Map< boolean , string > Have task results been completed? (mapped by Pod name), used to prevent premature garbage collection of artifacts. 
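Putting a few of the WorkflowSpec fields above together, a spec might look roughly like the sketch below; the parameter name, ServiceAccount, image and TTL value are illustrative assumptions rather than recommended settings:

```yaml
# Sketch of a WorkflowSpec exercising several of the fields documented above.
spec:
  entrypoint: main                # template reference to the starting point
  serviceAccountName: argo        # placeholder ServiceAccount for all pods
  parallelism: 4                  # cap on total parallel pods in the workflow
  arguments:
    parameters:
      - name: message             # referenced as {{workflow.parameters.message}}
        value: hello
  ttlStrategy:
    secondsAfterCompletion: 3600  # delete the finished Workflow after one hour
  podGC:
    strategy: OnPodSuccess        # garbage-collect pods that succeed
  templates:
    - name: main
      inputs:
        parameters:
          - name: message         # receives the workflow-level argument
      container:
        image: busybox
        command: [echo, "{{inputs.parameters.message}}"]
```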
CronWorkflowSpec \u00b6 CronWorkflowSpec is the specification of a CronWorkflow Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - 
[`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - 
[`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - 
[`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - 
[`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - 
[`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml) Fields \u00b6 Field Name Field Type Description concurrencyPolicy string ConcurrencyPolicy is the K8s-style concurrency policy that will be used failedJobsHistoryLimit integer FailedJobsHistoryLimit is the number of failed jobs to be kept at a time schedule string Schedule is a schedule to run the Workflow in Cron format startingDeadlineSeconds integer StartingDeadlineSeconds is the K8s-style deadline that will limit the time a CronWorkflow will be run after its original scheduled time if it is missed. successfulJobsHistoryLimit integer SuccessfulJobsHistoryLimit is the number of successful jobs to be kept at a time suspend boolean Suspend is a flag that will stop new CronWorkflows from running if set to true timezone string Timezone is the timezone against which the cron schedule will be calculated, e.g. \"Asia/Tokyo\". Default is machine's local time. workflowMetadata ObjectMeta WorkflowMetadata contains some metadata of the workflow to be run workflowSpec WorkflowSpec WorkflowSpec is the spec of the workflow to be run CronWorkflowStatus \u00b6 CronWorkflowStatus is the status of a CronWorkflow Fields \u00b6 Field Name Field Type Description active Array< ObjectReference > Active is a list of active workflows stemming from this CronWorkflow conditions Array< Condition > Conditions is a list of conditions the CronWorkflow may have lastScheduledTime Time LastScheduleTime is the last time the CronWorkflow was scheduled Arguments \u00b6 Arguments to a template Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - 
[`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - 
[`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - 
[`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) Fields \u00b6 Field Name Field Type Description artifacts Array< Artifact > Artifacts is the list of artifacts to pass to the template or workflow parameters Array< Parameter > Parameters is the list of parameters to pass to the template or workflow WorkflowLevelArtifactGC \u00b6 WorkflowLevelArtifactGC describes how to delete artifacts from completed Workflows - this spec is used on the Workflow level Examples with this field (click to open) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) Fields \u00b6 Field Name Field Type Description forceFinalizerRemoval boolean ForceFinalizerRemoval: if set to true, the finalizer will be removed in the case that Artifact GC fails podMetadata Metadata PodMetadata is an optional field for specifying the Labels and Annotations that should be assigned 
to the Pod doing the deletion podSpecPatch string PodSpecPatch holds strategic merge patch to apply against the artgc pod spec. serviceAccountName string ServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion strategy string Strategy is the strategy to use. ArtifactRepositoryRef \u00b6 No description available Examples with this field (click to open) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) Fields \u00b6 Field Name Field Type Description configMap string The name of the config map. Defaults to \"artifact-repositories\". key string The config map key. Defaults to the value of the \"workflows.argoproj.io/default-artifact-repository\" annotation. ExecutorConfig \u00b6 ExecutorConfig holds configurations of an executor container. Fields \u00b6 Field Name Field Type Description serviceAccountName string ServiceAccountName specifies the service account name of the executor container. LifecycleHook \u00b6 No description available Examples with this field (click to open) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) Fields \u00b6 Field Name Field Type Description arguments Arguments Arguments hold arguments to the template expression string Expression is a condition expression for when a node will be retried. If it evaluates to false, the node will not be retried and the retry strategy will be ignored template string Template is the name of the template to execute by the hook templateRef TemplateRef TemplateRef is the reference to the template resource to execute by the hook Metrics \u00b6 Metrics are a list of metrics emitted from a Workflow/Template Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) Fields \u00b6 Field Name Field Type Description prometheus Array< Prometheus > Prometheus is a list of prometheus metrics to be emitted PodGC \u00b6 PodGC describes how to delete completed pods as they complete Examples with this field (click to open) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) Fields \u00b6 Field Name Field Type Description deleteDelayDuration Duration DeleteDelayDuration specifies the duration before pods in the GC queue get deleted. labelSelector LabelSelector LabelSelector is the label selector to check if the pods match the labels before being added to the pod GC queue. strategy string Strategy is the strategy to use. One of \"OnPodCompletion\", \"OnPodSuccess\", \"OnWorkflowCompletion\", \"OnWorkflowSuccess\". 
If unset, does not delete Pods Metadata \u00b6 Pod metadata Examples with this field (click to open) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) Fields \u00b6 Field Name Field Type Description annotations Map< string , string > No description available labels Map< string , string > No description available RetryStrategy \u00b6 RetryStrategy provides controls on how to retry a workflow step Examples with this field (click to open) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description affinity RetryAffinity Affinity prevents running a workflow's step on the same host backoff Backoff Backoff is a backoff strategy expression string Expression is a condition expression for when a node will be retried. If it evaluates to false, the node will not be retried and the retry strategy will be ignored limit IntOrString Limit is the maximum number of retry attempts when retrying a container. It does not include the original container; the maximum number of total attempts will be limit + 1 . 
retryPolicy string RetryPolicy is a policy of NodePhase statuses that will be retried Synchronization \u00b6 Synchronization holds synchronization lock configuration Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) Fields \u00b6 Field Name Field Type Description mutex Mutex Mutex holds the Mutex lock details semaphore SemaphoreRef Semaphore holds the Semaphore configuration Template \u00b6 Template is a reusable and composable unit of execution in a workflow Examples with this field (click to open) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) Fields \u00b6 Field Name Field Type Description activeDeadlineSeconds IntOrString Optional duration in seconds relative to the StartTime that the pod may be active on a node before the system actively tries to terminate the pod; value must be positive integer This field is only applicable to container and script templates. affinity Affinity Affinity sets the pod's scheduling constraints Overrides the affinity set at the workflow level (if any) archiveLocation ArtifactLocation Location in which all files related to the step will be stored (logs, artifacts, etc...). Can be overridden by individual items in Outputs. If omitted, will use the default artifact repository location configured in the controller, appended with the / in the key. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods. ServiceAccountName of ExecutorConfig must be specified if this value is false. container Container Container is the main container image to run in the pod containerSet ContainerSetTemplate ContainerSet groups multiple containers within a single pod. daemon boolean Daemon will allow a workflow to proceed to the next step so long as the container reaches readiness dag DAGTemplate DAG template subtype which runs a DAG data Data Data is a data template executor ExecutorConfig Executor holds configurations of the executor container. failFast boolean FailFast, if specified, will fail this template if any of its child pods has failed. This is useful for when this template is expanded with withItems , etc. hostAliases Array< HostAlias > HostAliases is an optional list of hosts and IPs that will be injected into the pod spec http HTTP HTTP makes a HTTP request initContainers Array< UserContainer > InitContainers is a list of containers which run before the main container. inputs Inputs Inputs describe what inputs parameters and artifacts are supplied to this template memoize Memoize Memoize allows templates to use outputs generated from already executed templates metadata Metadata Metdata sets the pods's metadata, i.e. annotations and labels metrics Metrics Metrics are a list of metrics emitted from this template name string Name is the name of the template nodeSelector Map< string , string > NodeSelector is a selector to schedule this step of the workflow to be run on the selected node(s). Overrides the selector set at the workflow level. 
outputs Outputs Outputs describe the parameters and artifacts that this template produces parallelism integer Parallelism limits the max total parallel pods that can execute at the same time within the boundaries of this template invocation. If additional steps/dag templates are invoked, the pods created by those templates will not be counted towards this total. plugin Plugin Plugin is a plugin template podSpecPatch string PodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of container fields which are not strings (e.g. resource limits). priority integer Priority to apply to workflow pods. priorityClassName string PriorityClassName to apply to workflow pods. resource ResourceTemplate Resource template subtype which can run k8s resources retryStrategy RetryStrategy RetryStrategy describes how to retry a template when it fails schedulerName string If specified, the pod will be dispatched by specified scheduler. Or it will be dispatched by workflow scope scheduler if specified. If neither specified, the pod will be dispatched by default scheduler. script ScriptTemplate Script runs a portion of code against an interpreter securityContext PodSecurityContext SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. serviceAccountName string ServiceAccountName to apply to workflow pods sidecars Array< UserContainer > Sidecars is a list of containers which run alongside the main container Sidecars are automatically killed when the main container completes steps Array> Steps define a series of sequential/parallel workflow steps suspend SuspendTemplate Suspend template subtype which can suspend a workflow when reaching the step synchronization Synchronization Synchronization holds synchronization lock configuration for this template timeout string Timeout allows to set the total node execution timeout duration counting from the node's start time. This duration also includes time in which the node spends in Pending state. This duration may not be applied to Step or DAG templates. tolerations Array< Toleration > Tolerations to apply to workflow pods. volumes Array< Volume > Volumes is a list of volumes that can be mounted by containers in a template. TTLStrategy \u00b6 TTLStrategy is the strategy for the time to live depending on if the workflow succeeded or failed Examples with this field (click to open) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) Fields \u00b6 Field Name Field Type Description secondsAfterCompletion integer SecondsAfterCompletion is the number of seconds to live after completion secondsAfterFailure integer SecondsAfterFailure is the number of seconds to live after failure secondsAfterSuccess integer SecondsAfterSuccess is the number of seconds to live after success VolumeClaimGC \u00b6 VolumeClaimGC describes how to delete volumes from completed Workflows Fields \u00b6 Field Name Field Type Description strategy string Strategy is the strategy to use. One of \"OnWorkflowCompletion\", \"OnWorkflowSuccess\". 
Defaults to \"OnWorkflowSuccess\" WorkflowMetadata \u00b6 No description available Examples with this field (click to open) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) Fields \u00b6 Field Name Field Type Description annotations Map< string , string > No description available labels Map< string , string > No description available labelsFrom LabelValueFrom No description available WorkflowTemplateRef \u00b6 WorkflowTemplateRef is a reference to a WorkflowTemplate resource. Examples with this field (click to open) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml) Fields \u00b6 Field Name Field Type Description clusterScope boolean ClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate). name string Name is the resource name of the workflow template. ArtGCStatus \u00b6 ArtGCStatus maintains state related to ArtifactGC Fields \u00b6 Field Name Field Type Description notSpecified boolean if this is true, we already checked to see if we need to do it and we don't podsRecouped Map< boolean , string > have completed Pods been processed? (mapped by Pod name) used to prevent re-processing the Status of a Pod more than once strategiesProcessed Map< boolean , string > have Pods been started to perform this strategy? (enables us not to re-process what we've already done) ArtifactRepositoryRefStatus \u00b6 No description available Examples with this field (click to open) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) Fields \u00b6 Field Name Field Type Description artifactRepository ArtifactRepository The repository the workflow will use. This maybe empty before v3.1. configMap string The name of the config map. Defaults to \"artifact-repositories\". default boolean If this ref represents the default artifact repository, rather than a config map. key string The config map key. Defaults to the value of the \"workflows.argoproj.io/default-artifact-repository\" annotation. namespace string The namespace of the config map. Defaults to the workflow's namespace, or the controller's namespace (if found). 
Condition \u00b6 No description available Fields \u00b6 Field Name Field Type Description message string Message is the condition message status string Status is the status of the condition type string Type is the type of condition NodeStatus \u00b6 NodeStatus contains status information about an individual node in the workflow Fields \u00b6 Field Name Field Type Description boundaryID string BoundaryID indicates the node ID of the associated template root node to which this node belongs children Array< string > Children is a list of child node IDs daemoned boolean Daemoned tracks whether or not this node was daemoned and needs to be terminated displayName string DisplayName is a human readable representation of the node. Unique within a template boundary estimatedDuration integer EstimatedDuration in seconds. finishedAt Time Time at which this node completed hostNodeName string HostNodeName is the name of the Kubernetes node on which the Pod is running, if applicable id string ID is a unique identifier of a node within the workflow. It is implemented as a hash of the node name, which makes the ID deterministic inputs Inputs Inputs captures input parameter values and artifact locations supplied to this template invocation memoizationStatus MemoizationStatus MemoizationStatus holds information about cached nodes message string A human readable message indicating details about why the node is in this condition. name string Name is the unique name in the node tree used to generate the node ID nodeFlag NodeFlag NodeFlag tracks some history of the node (e.g. hooked, retried, etc.) outboundNodes Array< string > OutboundNodes tracks the node IDs which are considered \"outbound\" nodes to a template invocation. For every invocation of a template, there are nodes which we considered as \"outbound\". Essentially, these are the last nodes in the execution sequence to run, before the template is considered completed. These nodes are then connected as parents to a following step. In the case of single pod steps (i.e. container, script, resource templates), this list will be nil since the pod itself is already considered the \"outbound\" node. In the case of DAGs, outbound nodes are the \"target\" tasks (tasks with no children). In the case of steps, outbound nodes are all the containers involved in the last step group. NOTE: since templates are composable, the list of outbound nodes is carried upwards when a DAG/steps template invokes another DAG/steps template. In other words, the outbound nodes of a template will be a superset of the outbound nodes of its last children. outputs Outputs Outputs captures output parameter values and artifact locations produced by this template invocation phase string Phase is a simple, high-level summary of where the node is in its lifecycle. Can be used as a state machine. Will be one of these values \"Pending\", \"Running\" before the node is completed, or \"Succeeded\", \"Skipped\", \"Failed\", \"Error\", or \"Omitted\" as a final state. podIP string PodIP captures the IP of the pod for daemoned steps progress string Progress to completion resourcesDuration Map< integer , int64 > ResourcesDuration is indicative, but not accurate, resource duration. This is populated when the node completes. startedAt Time Time at which this node started synchronizationStatus NodeSynchronizationStatus SynchronizationStatus is the synchronization status of the node templateName string TemplateName is the template name which this node corresponds to. Not applicable to virtual nodes (e.g. 
Retry, StepGroup) templateRef TemplateRef TemplateRef is the reference to the template resource which this node corresponds to. Not applicable to virtual nodes (e.g. Retry, StepGroup) templateScope string TemplateScope is the template scope in which the template of this node was retrieved. type string Type indicates type of node Outputs \u00b6 Outputs hold parameters, artifacts, and results from a step Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - 
[`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description artifacts Array< Artifact > Artifacts holds the list of output artifacts produced by a step exitCode string ExitCode holds the exit code of a script template parameters Array< Parameter > Parameters holds the list of output parameters produced by a step result string Result holds the result (stdout) of a script template SynchronizationStatus \u00b6 SynchronizationStatus stores the status of semaphore and mutex. 
Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) Fields \u00b6 Field Name Field Type Description mutex MutexStatus Mutex stores this workflow's mutex holder details semaphore SemaphoreStatus Semaphore stores this workflow's Semaphore holder details Artifact \u00b6 Artifact indicates an artifact to place at a specified path Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - 
[`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description archive ArchiveStrategy Archive controls how the artifact will be saved to the artifact repository. archiveLogs boolean ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC describes the strategy to use when to deleting an artifact from completed or deleted workflows artifactory ArtifactoryArtifact Artifactory contains artifactory artifact location details azure AzureArtifact Azure contains Azure Storage artifact location details deleted boolean Has this been deleted? from string From allows an artifact to reference an artifact from a previous step fromExpression string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCS contains GCS artifact location details git GitArtifact Git contains git artifact location details globalName string GlobalName exports an output artifact to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFS contains HDFS artifact location details http HTTPArtifact HTTP contains HTTP artifact location details mode integer mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string name of the artifact. must be unique within a template's inputs/outputs. 
optional boolean Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSS contains OSS artifact location details path string Path is the container path to the artifact raw RawArtifact Raw contains raw artifact location details recurseMode boolean If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3 contains S3 artifact location details subPath string SubPath allows an artifact to be sourced from a subpath within the specified source Parameter \u00b6 Parameter indicate a passed string parameter to a service template with an optional default value Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - 
[`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - 
[`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - 
[`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) Fields \u00b6 Field Name Field Type Description default string Default is the default value to use for an input parameter if a value was not supplied description string Description is the parameter description enum Array< string > Enum holds a list of string values to choose from, for the actual value of the parameter globalName string GlobalName exports an output parameter to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.parameters.XXXX}} and in workflow.status.outputs.parameters name string Name is the parameter name value string Value is the literal value to use for the parameter. If specified in the context of an input parameter, the value takes precedence over any passed values valueFrom ValueFrom ValueFrom is the source for the output parameter's value TemplateRef \u00b6 TemplateRef is a reference of template resource. Examples with this field (click to open) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description clusterScope boolean ClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate). name string Name is the resource name of the template. template string Template is the name of referred template in the resource. 
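To see the TemplateRef fields above in context, here is a minimal, illustrative sketch (the resource name, template name, and parameter are placeholders, not taken from the linked examples) of a steps template invoking a template stored in a separate WorkflowTemplate:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: templateref-demo-            # hypothetical name
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: call-shared-template
            # templateRef: invoke a template defined in another (Cluster)WorkflowTemplate
            templateRef:
              name: shared-templates          # placeholder resource name
              template: print-message         # placeholder template name inside that resource
              clusterScope: false             # true would reference a ClusterWorkflowTemplate
            arguments:
              parameters:
                - name: message
                  value: hello
```

The `dag.yaml`, `steps.yaml`, and other examples linked above show the upstream usage of this field.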
Prometheus \u00b6 Prometheus is a prometheus metric to be emitted Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) Fields \u00b6 Field Name Field Type Description counter Counter Counter is a counter metric gauge Gauge Gauge is a gauge metric help string Help is a string that describes the metric histogram Histogram Histogram is a histogram metric labels Array< MetricLabel > Labels is a list of metric labels name string Name is the name of the metric when string When is a conditional statement that decides when to emit the metric RetryAffinity \u00b6 RetryAffinity prevents running steps on the same host. Fields \u00b6 Field Name Field Type Description nodeAntiAffinity RetryNodeAntiAffinity No description available Backoff \u00b6 Backoff is a backoff strategy to use within retryStrategy Examples with this field (click to open) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) Fields \u00b6 Field Name Field Type Description duration string Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. \"2m\", \"1h\") factor IntOrString Factor is a factor to multiply the base duration after each failed retry maxDuration string MaxDuration is the maximum amount of time allowed for a workflow in the backoff strategy Mutex \u00b6 Mutex holds Mutex configuration Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) Fields \u00b6 Field Name Field Type Description name string name of the mutex namespace string Namespace is the namespace of the mutex, default: [namespace of workflow] SemaphoreRef \u00b6 SemaphoreRef is a reference of Semaphore Fields \u00b6 Field Name Field Type Description configMapKeyRef ConfigMapKeySelector ConfigMapKeyRef is configmap selector for Semaphore configuration namespace string Namespace is the namespace of the configmap, default: [namespace of workflow] ArtifactLocation \u00b6 ArtifactLocation describes a location for a single or multiple artifacts. It is used as single artifact in the context of inputs/outputs (e.g. outputs.artifacts.artname). It is also used to describe the location of multiple artifacts such as the archive location of a single workflow step, which the executor will use as a default location to store its files. 
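As a rough sketch of the second usage (a default archive location rather than a single named artifact), the template below sets `archiveLocation` so the executor stores logs and step outputs in S3 by default; the endpoint, bucket, and Secret names are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: archive-location-demo-
spec:
  entrypoint: main
  templates:
    - name: main
      archiveLocation:
        archiveLogs: true              # also archive the container logs
        s3:
          endpoint: s3.amazonaws.com   # placeholder endpoint
          bucket: my-artifact-bucket   # placeholder bucket
          key: '{{workflow.name}}'     # store files under the workflow name
          accessKeySecret:
            name: my-s3-credentials    # placeholder Secret holding the access key
            key: accessKey
          secretKeySecret:
            name: my-s3-credentials
            key: secretKey
      container:
        image: alpine:3.18
        command: [sh, -c, 'echo hello > /tmp/out.txt']
```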
Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) Fields \u00b6 Field Name Field Type Description archiveLogs boolean ArchiveLogs indicates if the container logs should be archived artifactory ArtifactoryArtifact Artifactory contains artifactory artifact location details azure AzureArtifact Azure contains Azure Storage artifact location details gcs GCSArtifact GCS contains GCS artifact location details git GitArtifact Git contains git artifact location details hdfs HDFSArtifact HDFS contains HDFS artifact location details http HTTPArtifact HTTP contains HTTP artifact location details oss OSSArtifact OSS contains OSS artifact location details raw RawArtifact Raw contains raw artifact location details s3 S3Artifact S3 contains S3 artifact location details ContainerSetTemplate \u00b6 No description available Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) Fields \u00b6 Field Name Field Type Description containers Array< ContainerNode > No description available retryStrategy ContainerSetRetryStrategy RetryStrategy describes how to retry a container nodes in the container set if it fails. Nbr of retries(default 0) and sleep duration between retries(default 0s, instant retry) can be set. 
volumeMounts Array< VolumeMount > No description available DAGTemplate \u00b6 DAGTemplate is a template subtype for directed acyclic graph templates Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - 
[`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description failFast boolean This flag is for DAG logic. The DAG logic has a built-in \"fail fast\" feature to stop scheduling new steps, as soon as it detects that one of the DAG nodes is failed. Then it waits until all DAG nodes are completed before failing the DAG itself. The FailFast flag default is true, if set to false, it will allow a DAG to run all branches of the DAG to completion (either success or failure), regardless of the failed outcomes of branches in the DAG. More info and example about this feature at https://github.com/argoproj/argo-workflows/issues/1442 target string Target are one or more names of targets to execute in a DAG tasks Array< DAGTask > Tasks are a list of DAG tasks Data \u00b6 Data is a data template Examples with this field (click to open) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) Fields \u00b6 Field Name Field Type Description source DataSource Source sources external data into a data template transformation Array< TransformationStep > Transformation applies a set of transformations HTTP \u00b6 No description available Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - 
[`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) Fields \u00b6 Field Name Field Type Description body string Body is content of the HTTP Request bodyFrom HTTPBodySource BodyFrom is content of the HTTP Request as Bytes headers Array< HTTPHeader > Headers are an optional list of headers to send with HTTP requests insecureSkipVerify boolean InsecureSkipVerify is a bool when if set to true will skip TLS verification for the HTTP client method string Method is HTTP methods for HTTP Request successCondition string SuccessCondition is an expression if evaluated to true is considered successful timeoutSeconds integer TimeoutSeconds is request timeout for HTTP Request. Default is 30 seconds url string URL of the HTTP Request UserContainer \u00b6 UserContainer is a container specified by a user. Examples with this field (click to open) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) Fields \u00b6 Field Name Field Type Description args Array< string > Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Container image name. 
More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes mirrorVolumeMounts boolean MirrorVolumeMounts will mount the same volumes specified in the main container to the container (including artifacts), at the same mountPaths. This enables dind daemon to partially see the same filesystem as the main container in order to use features such as docker volume binding name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. 
If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 
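A UserContainer normally appears under a template's `initContainers` or `sidecars`. Below is a minimal sketch, assuming a hypothetical init container that pre-populates a shared volume before the main container runs; `mirrorVolumeMounts` gives it the same mounts as the main container:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: init-container-demo-
spec:
  entrypoint: main
  templates:
    - name: main
      initContainers:
        - name: warm-cache                 # hypothetical init container
          image: alpine:3.18
          command: [sh, -c, 'echo warmed > /work/cache.txt']
          mirrorVolumeMounts: true         # mount the same volumes as the main container
      container:
        image: alpine:3.18
        command: [sh, -c, 'cat /work/cache.txt']
        volumeMounts:
          - name: workdir
            mountPath: /work
  volumes:
    - name: workdir
      emptyDir: {}
```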
Inputs \u00b6 Inputs are the mechanism for passing parameters, artifacts, volumes from one template to another Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - 
[`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - 
[`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description artifacts Array< Artifact > Artifact are a list of artifacts passed as inputs parameters Array< Parameter > Parameters are a list of parameters passed as inputs Memoize \u00b6 Memoization enables caching for the Outputs of the template Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) Fields \u00b6 Field Name Field Type Description cache Cache Cache sets and configures the kind of cache key string Key is the key to use as the caching key maxAge string MaxAge is the maximum age (e.g. \"180s\", \"24h\") of an entry that is still considered valid. If an entry is older than the MaxAge, it will be ignored. Plugin \u00b6 Plugin is an Object with exactly one key ResourceTemplate \u00b6 ResourceTemplate is a template subtype to manipulate kubernetes resources Examples with this field (click to open) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) Fields \u00b6 Field Name Field Type Description action string Action is the action to perform to the resource. Must be one of: get, create, apply, delete, replace, patch failureCondition string FailureCondition is a label selector expression which describes the conditions of the k8s resource in which the step was considered failed flags Array< string > Flags is a set of additional options passed to kubectl before submitting a resource I.e. to disable resource validation: flags: [ \"--validate=false\" # disable resource validation ] manifest string Manifest contains the kubernetes manifest manifestFrom ManifestFrom ManifestFrom is the source for a single kubernetes manifest mergeStrategy string MergeStrategy is the strategy used to merge a patch. It defaults to \"strategic\" Must be one of: strategic, merge, json setOwnerReference boolean SetOwnerReference sets the reference to the workflow on the OwnerReference of generated resource. 
successCondition string SuccessCondition is a label selector expression which describes the conditions of the k8s resource in which it is acceptable to proceed to the following step ScriptTemplate \u00b6 ScriptTemplate is a template subtype to enable scripting through code steps Examples with this field (click to open) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - 
[`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description args Array< string > Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. 
Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ source string Source contains the source code of the script to execute startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. 
workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. WorkflowStep \u00b6 WorkflowStep is a reference to a template to execute in a series of step Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - 
[`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - 
[`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description arguments Arguments Arguments hold arguments to the template continueOn ContinueOn ContinueOn makes argo to proceed with the following step even if this step fails. 
Errors and Failed states can be specified hooks LifecycleHook Hooks holds the lifecycle hook which is invoked at lifecycle of step, irrespective of the success, failure, or error status of the primary step inline Template Inline is the template. Template must be empty if this is declared (and vice-versa). name string Name of the step ~~ onExit ~~ ~~ string ~~ ~~OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template.~~ DEPRECATED: Use Hooks[exit].Template instead. template string Template is the name of the template to execute as the step templateRef TemplateRef TemplateRef is the reference to the template resource to execute as the step. when string When is an expression in which the step should conditionally execute withItems Array< Item > WithItems expands a step into multiple parallel steps from the items in the list withParam string WithParam expands a step into multiple parallel steps from the value in the parameter, which is expected to be a JSON list. withSequence Sequence WithSequence expands a step into a numeric sequence SuspendTemplate \u00b6 SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time Examples with this field (click to open) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) Fields \u00b6 Field Name Field Type Description duration string Duration is the seconds to wait before automatically resuming a template. Must be a string. Default unit is seconds. 
Could also be a Duration, e.g.: \"2m\", \"6h\" LabelValueFrom \u00b6 No description available Examples with this field (click to open) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) Fields \u00b6 Field Name Field Type Description expression string No description available ArtifactRepository \u00b6 ArtifactRepository represents an artifact repository in which a controller will store its artifacts Fields \u00b6 Field Name Field Type Description archiveLogs boolean ArchiveLogs enables log archiving artifactory ArtifactoryArtifactRepository Artifactory stores artifacts to JFrog Artifactory azure AzureArtifactRepository Azure stores artifact in an Azure Storage account gcs GCSArtifactRepository GCS stores artifact in a GCS object store hdfs HDFSArtifactRepository HDFS stores artifacts in HDFS oss OSSArtifactRepository OSS stores artifact in a OSS-compliant object store s3 S3ArtifactRepository S3 stores artifact in a S3-compliant object store MemoizationStatus \u00b6 MemoizationStatus is the status of this memoized node Fields \u00b6 Field Name Field Type Description cacheName string Cache is the name of the cache that was used hit boolean Hit indicates whether this node was created from a cache entry key string Key is the name of the key used for this node's cache NodeFlag \u00b6 No description available Fields \u00b6 Field Name Field Type Description hooked boolean Hooked tracks whether or not this node was triggered by hook or onExit retried boolean Retried tracks whether or not this node was retried by retryStrategy NodeSynchronizationStatus \u00b6 NodeSynchronizationStatus stores the status of a node Fields \u00b6 Field Name Field Type Description waiting string Waiting is the name of the lock that this node is waiting for MutexStatus \u00b6 MutexStatus contains which objects hold mutex locks, and which objects this workflow is waiting on to release locks. Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) Fields \u00b6 Field Name Field Type Description holding Array< MutexHolding > Holding is a list of mutexes and their respective objects that are held by mutex lock for this io.argoproj.workflow.v1alpha1. waiting Array< MutexHolding > Waiting is a list of mutexes and their respective objects this workflow is waiting for. SemaphoreStatus \u00b6 No description available Fields \u00b6 Field Name Field Type Description holding Array< SemaphoreHolding > Holding stores the list of resource acquired synchronization lock for workflows. waiting Array< SemaphoreHolding > Waiting indicates the list of current synchronization lock holders. 
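These mutex and semaphore status objects are populated when a workflow declares synchronization. A minimal sketch of a workflow-level mutex, assuming a placeholder lock name `shared-resource-lock`:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: mutex-demo-
spec:
  entrypoint: main
  synchronization:
    mutex:
      name: shared-resource-lock   # placeholder mutex name
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [sh, -c, 'echo holding the lock; sleep 30']
```

While one workflow holds the lock, other workflows that request the same mutex list it under `waiting` in their MutexStatus.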
ArchiveStrategy \u00b6 ArchiveStrategy describes how to archive files/directory when saving artifacts Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) Fields \u00b6 Field Name Field Type Description none NoneStrategy No description available tar TarStrategy No description available zip ZipStrategy No description available ArtifactGC \u00b6 ArtifactGC describes how to delete artifacts from completed Workflows - this is embedded into the WorkflowLevelArtifactGC, and also used for individual Artifacts to override that as needed Examples with this field (click to open) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) Fields \u00b6 Field Name Field Type Description podMetadata Metadata PodMetadata is an optional field for specifying the Labels and Annotations that should be assigned to the Pod doing the deletion serviceAccountName string ServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion strategy string Strategy is the strategy to use. ArtifactoryArtifact \u00b6 ArtifactoryArtifact is the location of an artifactory artifact Examples with this field (click to open) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) Fields \u00b6 Field Name Field Type Description passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password url string URL of the artifact usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username AzureArtifact \u00b6 AzureArtifact is the location of a an Azure Storage artifact Examples with this field (click to open) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) Fields \u00b6 Field Name Field Type Description accountKeySecret SecretKeySelector AccountKeySecret is the secret selector to the Azure Blob Storage account access key blob string Blob is the blob name (i.e., path) in the container where the artifact resides container string Container is the container where resources will be stored endpoint string Endpoint is the service url associated with an account. It is most likely \"https:// .blob.core.windows.net\" useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults. 
GCSArtifact \u00b6 GCSArtifact is the location of a GCS artifact Examples with this field (click to open) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) Fields \u00b6 Field Name Field Type Description bucket string Bucket is the name of the bucket key string Key is the path in the bucket where the artifact resides serviceAccountKeySecret SecretKeySelector ServiceAccountKeySecret is the secret selector to the bucket's service account key GitArtifact \u00b6 GitArtifact is the location of an git artifact Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) Fields \u00b6 Field Name Field Type Description branch string Branch is the branch to fetch when SingleBranch is enabled depth integer Depth specifies clones/fetches should be shallow and include the given number of commits from the branch tip disableSubmodules boolean DisableSubmodules disables submodules during git clone fetch Array< string > Fetch specifies a number of refs that should be fetched before checkout insecureIgnoreHostKey boolean InsecureIgnoreHostKey disables SSH strict host key checking during git clone passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password repo string Repo is the git repository revision string Revision is the git commit, tag, branch to checkout singleBranch boolean SingleBranch enables single branch clone, using the branch parameter sshPrivateKeySecret SecretKeySelector SSHPrivateKeySecret is the secret selector to the repository ssh private key usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username HDFSArtifact \u00b6 HDFSArtifact is the location of an HDFS artifact Examples with this field (click to open) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) Fields \u00b6 Field Name Field Type Description addresses Array< string > Addresses is accessible addresses of HDFS name nodes force boolean Force copies a file forcibly even if it exists hdfsUser string HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used. krbCCacheSecret SecretKeySelector KrbCCacheSecret is the secret selector for Kerberos ccache Either ccache or keytab can be set to use Kerberos. krbConfigConfigMap ConfigMapKeySelector KrbConfig is the configmap selector for Kerberos config as string It must be set if either ccache or keytab is used. krbKeytabSecret SecretKeySelector KrbKeytabSecret is the secret selector for Kerberos keytab Either ccache or keytab can be set to use Kerberos. krbRealm string KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used. krbServicePrincipalName string KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used. 
krbUsername string KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used. path string Path is a file path in HDFS HTTPArtifact \u00b6 HTTPArtifact allows a file served on HTTP to be placed as an input artifact in a container Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) Fields \u00b6 Field Name Field Type Description auth HTTPAuth Auth contains information for client authentication headers Array< Header > Headers are an optional list of headers to send with HTTP requests for artifacts url string URL of the artifact OSSArtifact \u00b6 OSSArtifact is the location of an Alibaba Cloud OSS artifact Examples with this field (click to open) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) Fields \u00b6 Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's access key bucket string Bucket is the name of the bucket createBucketIfNotPresent boolean CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist endpoint string Endpoint is the hostname of the bucket endpoint key string Key is the path in the bucket where the artifact resides lifecycleRule OSSLifecycleRule LifecycleRule specifies how to manage bucket's lifecycle secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key securityToken string SecurityToken is the user's temporary security token. 
For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults. RawArtifact \u00b6 RawArtifact allows raw string content to be placed as an artifact in a container Examples with this field (click to open) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) Fields \u00b6 Field Name Field Type Description data string Data is the string contents of the artifact S3Artifact \u00b6 S3Artifact is the location of an S3 artifact Fields \u00b6 Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's access key bucket string Bucket is the name of the bucket caSecret SecretKeySelector CASecret specifies the secret that contains the CA, used to verify the TLS connection createBucketIfNotPresent CreateS3BucketOptions CreateBucketIfNotPresent tells the driver to attempt to create the S3 bucket for output artifacts, if it doesn't exist. Setting Enabled Encryption will apply either SSE-S3 to the bucket if KmsKeyId is not set or SSE-KMS if it is. encryptionOptions S3EncryptionOptions No description available endpoint string Endpoint is the hostname of the bucket endpoint insecure boolean Insecure, if set to true, will connect to the service without TLS key string Key is the key in the bucket where the artifact resides region string Region contains the optional bucket region roleARN string RoleARN is the Amazon Resource Name (ARN) of the role to assume. secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults.
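A minimal sketch of an S3Artifact on a template output; the bucket, key, and secret names are placeholders, and the two secret selectors can be dropped when useSDKCreds is enabled:

```yaml
outputs:
  artifacts:
    - name: result
      path: /tmp/result.txt
      s3:
        endpoint: s3.amazonaws.com
        bucket: my-bucket                 # placeholder bucket
        key: results/result.txt
        accessKeySecret:
          name: my-s3-credentials         # placeholder secret
          key: accessKey
        secretKeySecret:
          name: my-s3-credentials
          key: secretKey
```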
ValueFrom \u00b6 ValueFrom describes a location in which to obtain the value to a parameter Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) Fields \u00b6 Field Name Field Type Description configMapKeyRef ConfigMapKeySelector ConfigMapKeyRef is configmap selector for input parameter configuration default string Default specifies a value to be used if retrieving the value from the specified source fails event string Selector (https://github.com/antonmedv/expr) that is evaluated against the event to get the value of the parameter. E.g. 
payload.message expression string Expression, if defined, is evaluated to specify the value for the parameter jqFilter string JQFilter expression against the resource object in resource templates jsonPath string JSONPath of a resource to retrieve an output parameter value from in resource templates parameter string Parameter reference to a step or dag task in which to retrieve an output parameter value from (e.g. '{{steps.mystep.outputs.myparam}}') path string Path in the container to retrieve an output parameter value from in container templates supplied SuppliedValueFrom Supplied value to be filled in directly, either through the CLI, API, etc. Counter \u00b6 Counter is a Counter prometheus metric Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) Fields \u00b6 Field Name Field Type Description value string Value is the value of the metric Gauge \u00b6 Gauge is a Gauge prometheus metric Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) Fields \u00b6 Field Name Field Type Description operation string Operation defines the operation to apply with value and the metrics' current value realtime boolean Realtime emits this metric in real time if applicable value string Value is the value to be used in the operation with the metric's current value. If no operation is set, value is the value of the metric Histogram \u00b6 Histogram is a Histogram prometheus metric Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) Fields \u00b6 Field Name Field Type Description buckets Array< Amount > Buckets is a list of bucket divisors for the histogram value string Value is the value of the metric MetricLabel \u00b6 MetricLabel is a single label for a prometheus metric Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - 
[`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) Fields \u00b6 Field Name Field Type Description key string No description available value string No description available RetryNodeAntiAffinity \u00b6 RetryNodeAntiAffinity is a placeholder for future expansion, only empty nodeAntiAffinity is allowed. In order to prevent running steps on the same host, it uses \"kubernetes.io/hostname\". 
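To illustrate how the metric types and MetricLabel above combine with RetryNodeAntiAffinity, here is a sketch of a template that emits a labelled Counter and schedules retries on a different host; the metric name, image, and label value are illustrative only:

```yaml
templates:
  - name: flaky-step
    retryStrategy:
      limit: "3"
      affinity:
        nodeAntiAffinity: {}          # retries avoid the previous kubernetes.io/hostname
    metrics:
      prometheus:
        - name: step_result_counter   # illustrative metric name
          help: "Count of step completions, labelled by status"
          labels:
            - key: status
              value: "{{status}}"
          counter:
            value: "1"
    container:
      image: alpine:3.18              # placeholder image
      command: [sh, -c, "exit 0"]
```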
ContainerNode \u00b6 No description available Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) Fields \u00b6 Field Name Field Type Description args Array< string > Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell dependencies Array< string > No description available env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. 
Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. 
Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. ContainerSetRetryStrategy \u00b6 No description available Examples with this field (click to open) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description duration string Duration is the time between each retry, examples values are \"300ms\", \"1s\" or \"5m\". Valid time units are \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\". 
retries IntOrString Nbr of retries DAGTask \u00b6 DAGTask represents a node in the graph during DAG execution Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - 
[`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Fields \u00b6 Field Name Field Type Description arguments Arguments Arguments are the parameter and artifact arguments to the template continueOn ContinueOn ContinueOn makes Argo proceed with the following step even if this step fails. Errors and Failed states can be specified dependencies Array< string > Dependencies are the names of other targets which this depends on depends string Depends is an expression over the names of other targets which this depends on hooks LifecycleHook Hooks hold the lifecycle hook which is invoked at lifecycle of task, irrespective of the success, failure, or error status of the primary task inline Template Inline is the template. Template must be empty if this is declared (and vice-versa). name string Name is the name of the target ~~ onExit ~~ ~~ string ~~ ~~OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template.~~ DEPRECATED: Use Hooks[exit].Template instead. template string Name of template to execute templateRef TemplateRef TemplateRef is the reference to the template resource to execute. when string When is an expression in which the task should conditionally execute withItems Array< Item > WithItems expands a task into multiple parallel tasks from the items in the list withParam string WithParam expands a task into multiple parallel tasks from the value in the parameter, which is expected to be a JSON list.
withSequence Sequence WithSequence expands a task into a numeric sequence DataSource \u00b6 DataSource sources external data into a data template Examples with this field (click to open) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - 
[`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description artifactPaths ArtifactPaths ArtifactPaths is a data transformation that collects a list of artifact paths TransformationStep \u00b6 No description available Examples with this field (click to open) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) Fields \u00b6 Field Name Field Type Description expression string Expression defines an expr expression to apply HTTPBodySource \u00b6 HTTPBodySource contains the source of the HTTP body. Fields \u00b6 Field Name Field Type Description bytes byte No description available HTTPHeader \u00b6 No description available Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) Fields \u00b6 Field Name Field Type Description name string No description available value string No description available valueFrom HTTPHeaderSource No description available Cache \u00b6 Cache is the configuration for the type of cache to be used Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) Fields \u00b6 Field Name Field Type Description configMap ConfigMapKeySelector ConfigMap sets a ConfigMap-based cache ManifestFrom \u00b6 No description available Fields \u00b6 Field Name Field Type Description artifact Artifact Artifact contains the artifact to use ContinueOn \u00b6 ContinueOn defines if a workflow should continue even if a task or step fails/errors. It can be specified if the workflow should continue when the pod errors, fails or both. 
Examples with this field (click to open) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) Fields \u00b6 Field Name Field Type Description error boolean No description available failed boolean No description available Item \u00b6 Item expands a single workflow step into multiple parallel steps The value of Item can be a map, string, bool, or number Examples with this field (click to open) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) Sequence \u00b6 Sequence expands a workflow step into numeric range Examples with this field (click to open) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description count IntOrString Count is number of elements in the sequence (default: 0). Not to be used with end end IntOrString Number at which to end the sequence (default: 0). 
Not to be used with Count format string Format is a printf format string to format the value in the sequence start IntOrString Number at which to start the sequence (default: 0) ArtifactoryArtifactRepository \u00b6 ArtifactoryArtifactRepository defines the controller configuration for an artifactory artifact repository Examples with this field (click to open) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) Fields \u00b6 Field Name Field Type Description keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password repoURL string RepoURL is the url for artifactory repo. usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username AzureArtifactRepository \u00b6 AzureArtifactRepository defines the controller configuration for an Azure Blob Storage artifact repository Examples with this field (click to open) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) Fields \u00b6 Field Name Field Type Description accountKeySecret SecretKeySelector AccountKeySecret is the secret selector to the Azure Blob Storage account access key blobNameFormat string BlobNameFormat is defines the format of how to store blob names. Can reference workflow variables container string Container is the container where resources will be stored endpoint string Endpoint is the service url associated with an account. It is most likely \"https:// .blob.core.windows.net\" useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults. GCSArtifactRepository \u00b6 GCSArtifactRepository defines the controller configuration for a GCS artifact repository Examples with this field (click to open) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) Fields \u00b6 Field Name Field Type Description bucket string Bucket is the name of the bucket keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. serviceAccountKeySecret SecretKeySelector ServiceAccountKeySecret is the secret selector to the bucket's service account key HDFSArtifactRepository \u00b6 HDFSArtifactRepository defines the controller configuration for an HDFS artifact repository Examples with this field (click to open) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) Fields \u00b6 Field Name Field Type Description addresses Array< string > Addresses is accessible addresses of HDFS name nodes force boolean Force copies a file forcibly even if it exists hdfsUser string HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used. krbCCacheSecret SecretKeySelector KrbCCacheSecret is the secret selector for Kerberos ccache Either ccache or keytab can be set to use Kerberos. krbConfigConfigMap ConfigMapKeySelector KrbConfig is the configmap selector for Kerberos config as string It must be set if either ccache or keytab is used. 
krbKeytabSecret SecretKeySelector KrbKeytabSecret is the secret selector for Kerberos keytab Either ccache or keytab can be set to use Kerberos. krbRealm string KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used. krbServicePrincipalName string KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used. krbUsername string KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used. pathFormat string PathFormat defines the format of the path used to store a file. Can reference workflow variables OSSArtifactRepository \u00b6 OSSArtifactRepository defines the controller configuration for an OSS artifact repository Examples with this field (click to open) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) Fields \u00b6 Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's access key bucket string Bucket is the name of the bucket createBucketIfNotPresent boolean CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist endpoint string Endpoint is the hostname of the bucket endpoint keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. lifecycleRule OSSLifecycleRule LifecycleRule specifies how to manage bucket's lifecycle secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key securityToken string SecurityToken is the user's temporary security token. For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults. S3ArtifactRepository \u00b6 S3ArtifactRepository defines the controller configuration for an S3 artifact repository Fields \u00b6 Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's access key bucket string Bucket is the name of the bucket caSecret SecretKeySelector CASecret specifies the secret that contains the CA, used to verify the TLS connection createBucketIfNotPresent CreateS3BucketOptions CreateBucketIfNotPresent tells the driver to attempt to create the S3 bucket for output artifacts, if it doesn't exist. Setting Enabled Encryption will apply either SSE-S3 to the bucket if KmsKeyId is not set or SSE-KMS if it is. encryptionOptions S3EncryptionOptions No description available endpoint string Endpoint is the hostname of the bucket endpoint insecure boolean Insecure, if set to true, will connect to the service without TLS keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. ~~ keyPrefix ~~ ~~ string ~~ ~~KeyPrefix is prefix used as part of the bucket key in which the controller will store artifacts.~~ DEPRECATED. Use KeyFormat instead region string Region contains the optional bucket region roleARN string RoleARN is the Amazon Resource Name (ARN) of the role to assume. secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults. MutexHolding \u00b6 MutexHolding describes the mutex and the object which is holding it.
Fields \u00b6 Field Name Field Type Description holder string Holder is a reference to the object which holds the Mutex. Holding Scenario: 1. Current workflow's NodeID which is holding the lock. e.g: ${NodeID} Waiting Scenario: 1. Current workflow or other workflow NodeID which is holding the lock. e.g: ${WorkflowName}/${NodeID} mutex string Reference for the mutex e.g: ${namespace}/mutex/${mutexName} SemaphoreHolding \u00b6 No description available Fields \u00b6 Field Name Field Type Description holders Array< string > Holders stores the list of current holder names in the io.argoproj.workflow.v1alpha1. semaphore string Semaphore stores the semaphore name. NoneStrategy \u00b6 NoneStrategy indicates to skip tar process and upload the files or directory tree as independent files. Note that if the artifact is a directory, the artifact driver must support the ability to save/load the directory appropriately. Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) TarStrategy \u00b6 TarStrategy will tar and gzip the file or directory when saving Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) Fields \u00b6 Field Name Field Type Description compressionLevel integer CompressionLevel specifies the gzip compression level to use for the artifact. Defaults to gzip.DefaultCompression. 
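The strategies above map onto an artifact's archive field; a short sketch with placeholder artifact names showing NoneStrategy and a TarStrategy compression level:

```yaml
outputs:
  artifacts:
    - name: raw-files                 # placeholder artifact name
      path: /tmp/logs
      archive:
        none: {}                      # NoneStrategy: upload files/directories as-is
    - name: bundle
      path: /tmp/bundle
      archive:
        tar:
          compressionLevel: 9         # TarStrategy: tar + gzip at maximum compression
```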
ZipStrategy \u00b6 ZipStrategy will unzip zipped input artifacts HTTPAuth \u00b6 No description available Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) Fields \u00b6 Field Name Field Type Description basicAuth BasicAuth No description available clientCert ClientCertAuth No description available oauth2 OAuth2Auth No description available Header \u00b6 Header indicates a key-value request header to be used when fetching artifacts over HTTP Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) Fields \u00b6 Field Name Field Type Description name string Name is the header name value string Value is the literal value to use for the header OSSLifecycleRule \u00b6 OSSLifecycleRule specifies how to manage bucket's lifecycle Fields \u00b6 Field Name Field Type Description markDeletionAfterDays integer MarkDeletionAfterDays is the number of days before we delete objects in the bucket markInfrequentAccessAfterDays integer MarkInfrequentAccessAfterDays is the number of days before we convert the objects in the bucket to Infrequent Access (IA) storage type CreateS3BucketOptions \u00b6 CreateS3BucketOptions options used to determine the automatic bucket-creation process Fields \u00b6 Field Name Field Type Description objectLocking boolean ObjectLocking enables object locking S3EncryptionOptions \u00b6 S3EncryptionOptions used to determine encryption options during s3 operations Fields \u00b6 Field Name Field Type Description enableEncryption boolean EnableEncryption tells the driver to encrypt objects if set to true. If kmsKeyId and serverSideCustomerKeySecret are not set, SSE-S3 will be used kmsEncryptionContext string KmsEncryptionContext is a json blob that contains an encryption context. See https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context for more information kmsKeyId string KMSKeyId tells the driver to encrypt the object using the specified KMS Key. serverSideCustomerKeySecret SecretKeySelector ServerSideCustomerKeySecret tells the driver to encrypt the output artifacts using SSE-C with the specified secret. SuppliedValueFrom \u00b6 SuppliedValueFrom is a placeholder for a value to be filled in directly, either through the CLI, API, etc. Examples with this field (click to open) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) Amount \u00b6 Amount represents a numeric amount. Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) ArtifactPaths \u00b6 ArtifactPaths expands a step from a collection of artifacts Examples with this field (click to open) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) Fields \u00b6 Field Name Field Type Description archive ArchiveStrategy Archive controls how the artifact will be saved to the artifact repository.
archiveLogs boolean ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC describes the strategy to use when deleting an artifact from completed or deleted workflows artifactory ArtifactoryArtifact Artifactory contains artifactory artifact location details azure AzureArtifact Azure contains Azure Storage artifact location details deleted boolean Has this been deleted? from string From allows an artifact to reference an artifact from a previous step fromExpression string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCS contains GCS artifact location details git GitArtifact Git contains git artifact location details globalName string GlobalName exports an output artifact to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.artifacts.XXXX}}' and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFS contains HDFS artifact location details http HTTPArtifact HTTP contains HTTP artifact location details mode integer Mode bits to use on this file; must be a value between 0 and 0777, set when loading input artifacts. name string Name of the artifact. Must be unique within a template's inputs/outputs. optional boolean Make the artifact optional, so a missing or non-generated artifact does not cause an error oss OSSArtifact OSS contains OSS artifact location details path string Path is the container path to the artifact raw RawArtifact Raw contains raw artifact location details recurseMode boolean If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3 contains S3 artifact location details subPath string SubPath allows an artifact to be sourced from a subpath within the specified source HTTPHeaderSource \u00b6 No description available Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - 
[`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) Fields \u00b6 Field Name Field Type Description secretKeyRef SecretKeySelector No description available BasicAuth \u00b6 BasicAuth describes the secret selectors required for basic authentication. Fields \u00b6 Field Name Field Type Description passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username ClientCertAuth \u00b6 ClientCertAuth holds necessary information for client authentication via certificates. Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) Fields \u00b6 Field Name Field Type Description clientCertSecret SecretKeySelector No description available clientKeySecret SecretKeySelector No description available OAuth2Auth \u00b6 OAuth2Auth holds all information for client authentication via OAuth2 tokens. Fields \u00b6 Field Name Field Type Description clientIDSecret SecretKeySelector No description available clientSecretSecret SecretKeySelector No description available endpointParams Array< OAuth2EndpointParam > No description available scopes Array< string > No description available tokenURLSecret SecretKeySelector No description available OAuth2EndpointParam \u00b6 EndpointParam is for requesting optional fields that should be sent in the OAuth2 token request. Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) Fields \u00b6 Field Name Field Type Description key string Key is the name of the endpoint parameter sent in the OAuth2 token request value string Value is the literal value to use for the parameter External Fields \u00b6 ObjectMeta \u00b6 ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.
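Before the field list, a short orientation: a Workflow's metadata is ordinary Kubernetes ObjectMeta. A minimal, hedged sketch follows; the name prefix, namespace, label, and annotation are placeholders.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-   # GenerateName: the server appends a unique suffix
  namespace: argo              # placeholder namespace
  labels:
    app: example               # placeholder label used for scoping and selection
  annotations:
    note: placeholder annotation for arbitrary, non-queryable metadata
```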
Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - 
[`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - 
[`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - 
[`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - 
[`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - 
[`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml) Fields \u00b6 Field Name Field Type Description annotations Map< string , string > Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations clusterName string The name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request. creationTimestamp Time CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata deletionGracePeriodSeconds integer Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. deletionTimestamp Time DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested. Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata finalizers Array< string > Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. 
Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency generation integer A sequence number representing a specific generation of the desired state. Populated by the system. Read-only. labels Map< string , string > Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels managedFields Array< ManagedFieldsEntry > ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like \"ci-cd\". The set of fields is always in the version that the workflow used when modifying the object. name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences Array< OwnerReference > List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. 
resourceVersion string An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and pass them unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency ~~ selfLink ~~ ~~ string ~~ ~~SelfLink is a URL representing this object. Populated by the system. Read-only.~~ DEPRECATED Kubernetes will stop propagating this field in the 1.20 release and the field is planned to be removed in the 1.21 release. uid string UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations. Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids Affinity \u00b6 Affinity is a group of affinity scheduling rules. Fields \u00b6 Field Name Field Type Description nodeAffinity NodeAffinity Describes node affinity scheduling rules for the pod. podAffinity PodAffinity Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity PodAntiAffinity Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). PodDNSConfig \u00b6 PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. Examples with this field (click to open) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) Fields \u00b6 Field Name Field Type Description nameservers Array< string > A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options Array< PodDNSConfigOption > A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. searches Array< string > A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. HostAlias \u00b6 HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Fields \u00b6 Field Name Field Type Description hostnames Array< string > Hostnames for the above IP address. ip string IP address of the host file entry. LocalObjectReference \u00b6 LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Examples with this field (click to open) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) Fields \u00b6 Field Name Field Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names PodDisruptionBudgetSpec \u00b6 PodDisruptionBudgetSpec is a description of a PodDisruptionBudget.
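A hedged sketch of how this type is commonly supplied in Argo Workflows, assuming the workflow-level podDisruptionBudget field (compare default-pdb-support.yaml linked below); the name prefix, image, and minAvailable value are illustrative only.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: pdb-sketch-      # placeholder name prefix
spec:
  entrypoint: main
  podDisruptionBudget:
    minAvailable: 9999           # a very high value effectively blocks voluntary eviction of the workflow's pods
  templates:
    - name: main
      container:
        image: alpine:3.18       # placeholder image
        command: [ echo, hello ] # placeholder command
```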
Examples with this field (click to open) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) Fields \u00b6 Field Name Field Type Description maxUnavailable IntOrString An eviction is allowed if at most \"maxUnavailable\" pods selected by \"selector\" are unavailable after the eviction, i.e. even in absence of the evicted pod. For example, one can prevent all voluntary evictions by specifying 0. This is a mutually exclusive setting with \"minAvailable\". minAvailable IntOrString An eviction is allowed if at least \"minAvailable\" pods selected by \"selector\" will still be available after the eviction, i.e. even in the absence of the evicted pod. So for example you can prevent all voluntary evictions by specifying \"100%\". selector LabelSelector Label query over pods whose evictions are managed by the disruption budget. A null selector will match no pods, while an empty ({}) selector will select all pods within the namespace. PodSecurityContext \u00b6 PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Examples with this field (click to open) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) Fields \u00b6 Field Name Field Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are \"OnRootMismatch\" and \"Always\". If not specified, \"Always\" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. 
Note that this field cannot be set when spec.os.name is windows. seLinuxOptions SELinuxOptions The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile SeccompProfile The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups Array< integer > A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows. sysctls Array< Sysctl > Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. windowsOptions WindowsSecurityContextOptions The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Toleration \u00b6 The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Fields \u00b6 Field Name Field Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - \"NoExecute\" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - \"NoSchedule\" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - \"PreferNoSchedule\" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - \"Equal\" - \"Exists\" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
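A hedged sketch of tolerations supplied at the workflow level, where they are applied to the pods the workflow creates; the taint keys, values, and duration below are placeholders.

```yaml
spec:
  entrypoint: main
  tolerations:
    - key: gpu                 # placeholder taint key
      operator: Equal
      value: enabled           # placeholder taint value
      effect: NoSchedule       # tolerate the NoSchedule effect of this taint
    - key: maintenance         # placeholder taint key
      operator: Exists         # wildcard: tolerate any value for this key
      effect: NoExecute
      tolerationSeconds: 3600  # stay on the node for up to one hour after the taint appears
```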
PersistentVolumeClaim \u00b6 PersistentVolumeClaim is a user's request for and claim to a persistent volume Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec PersistentVolumeClaimSpec Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status PersistentVolumeClaimStatus Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Volume \u00b6 Volume represents a named volume in a pod that may be accessed by any container in the pod. Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) Fields \u00b6 Field Name Field Type Description awsElasticBlockStore AWSElasticBlockStoreVolumeSource AWSElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk AzureDiskVolumeSource AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. 
azureFile AzureFileVolumeSource AzureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs CephFSVolumeSource CephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder CinderVolumeSource Cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap ConfigMapVolumeSource ConfigMap represents a configMap that should populate this volume csi CSIVolumeSource CSI (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI DownwardAPIVolumeSource DownwardAPI represents downward API about the pod that should populate this volume emptyDir EmptyDirVolumeSource EmptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral EphemeralVolumeSource Ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc FCVolumeSource FC represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume FlexVolumeSource FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker FlockerVolumeSource Flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk GCEPersistentDiskVolumeSource GCEPersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk ~~ gitRepo ~~ ~~ GitRepoVolumeSource ~~ ~~GitRepo represents a git repository at a particular revision.~~ DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs GlusterfsVolumeSource Glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath HostPathVolumeSource HostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath iscsi ISCSIVolumeSource ISCSI represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string Volume's name. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs NFSVolumeSource NFS represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim PersistentVolumeClaimVolumeSource PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk PhotonPersistentDiskVolumeSource PhotonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume PortworxVolumeSource PortworxVolume represents a portworx volume attached and mounted on kubelets host machine projected ProjectedVolumeSource Items for all in one resources secrets, configmaps, and downward API quobyte QuobyteVolumeSource Quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd RBDVolumeSource RBD represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO ScaleIOVolumeSource ScaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret SecretVolumeSource Secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos StorageOSVolumeSource StorageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume VsphereVirtualDiskVolumeSource VsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Time \u00b6 Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers. ObjectReference \u00b6 ObjectReference contains enough information to let you inspect or modify the referred object. Fields \u00b6 Field Name Field Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: \"spec.containers{name}\" (where \"name\" refers to the name of the container that triggered the event) or if no container name is specified \"spec.containers[2]\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids Duration \u00b6 Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json. Examples with this field (click to open) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) Fields \u00b6 Field Name Field Type Description duration string No description available LabelSelector \u00b6 A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Examples with this field (click to open) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) Fields \u00b6 Field Name Field Type Description matchExpressions Array< LabelSelectorRequirement > matchExpressions is a list of label selector requirements. The requirements are ANDed. matchLabels Map< string , string > matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values array contains only \"value\". The requirements are ANDed. IntOrString \u00b6 No description available Examples with this field (click to open) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) Container \u00b6 A single application container that you want to run within a pod. 
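As a brief illustration before the examples, here is a minimal, hedged sketch of how the Kubernetes Container type is embedded under a template's container field; the template name, image, parameter, and resource requests are placeholders.

```yaml
  templates:
    - name: print-message          # placeholder template name
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:3.18         # placeholder image
        command: [ echo ]
        args:
          - '{{inputs.parameters.message}}'  # resolved by Argo before the pod is created
        resources:
          requests:
            cpu: 100m
            memory: 32Mi
```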
Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - 
[`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - 
[`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - 
[`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - 
[`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) Fields \u00b6 Field Name Field Type Description args Array< string > Arguments to the entrypoint. The docker image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The docker image's ENTRYPOINT is used if this is not provided. 
Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Docker image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - \"Always\" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - \"IfNotPresent\" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - \"Never\" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. 
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true, the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicates how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - \"FallbackToLogsOnError\" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - \"File\" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. ConfigMapKeySelector \u00b6 Selects a key from a ConfigMap. 
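As a hedged illustration (the ConfigMap name and key below are invented), a ConfigMapKeySelector is commonly referenced from a parameter's valueFrom, for example on a template input parameter:

```yaml
# Fragment of a Workflow template: an input parameter whose value is read
# from a ConfigMap key via a ConfigMapKeySelector (placeholder names).
inputs:
  parameters:
    - name: message
      valueFrom:
        configMapKeyRef:
          name: my-config   # placeholder ConfigMap name
          key: msg          # placeholder key within the ConfigMap
```

The linked examples such as arguments-parameters-from-configmap.yaml show the complete pattern.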
Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) Fields \u00b6 Field Name Field Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined VolumeMount \u00b6 VolumeMount describes a mounting of a Volume within a container. Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to \"\" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to \"\" (volume's root). SubPathExpr and SubPath are mutually exclusive. EnvVar \u00b6 EnvVar represents an environment variable present in a Container. 
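A short sketch (the image, variable names, and Secret reference are illustrative) of EnvVar entries on a container, including one populated from an EnvVarSource:

```yaml
# Fragment of a container template: a literal environment variable plus one
# resolved from a Secret key (all names below are placeholders).
container:
  image: alpine:3.19
  command: [sh, -c, "env | sort"]
  env:
    - name: LOG_LEVEL
      value: debug
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-secret   # placeholder Secret name
          key: password     # placeholder key
```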
Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) Fields \u00b6 Field Name Field Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to \"\". valueFrom EnvVarSource Source for the environment variable's value. Cannot be used if value is not empty. EnvFromSource \u00b6 EnvFromSource represents the source of a set of ConfigMaps Fields \u00b6 Field Name Field Type Description configMapRef ConfigMapEnvSource The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef SecretEnvSource The Secret to select from Lifecycle \u00b6 Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Fields \u00b6 Field Name Field Type Description postStart LifecycleHandler PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop LifecycleHandler PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Probe \u00b6 Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Fields \u00b6 Field Name Field Type Description exec ExecAction Exec specifies the action to take. 
failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc GRPCAction GRPC specifies an action involving a GRPC port. This is an alpha field and requires enabling GRPCContainerProbe feature gate. httpGet HTTPGetAction HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket TCPSocketAction TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes ContainerPort \u00b6 ContainerPort represents a network port in a single container. Fields \u00b6 Field Name Field Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to \"TCP\". Possible enum values: - \"SCTP\" is the SCTP protocol. - \"TCP\" is the TCP protocol. - \"UDP\" is the UDP protocol. ResourceRequirements \u00b6 ResourceRequirements describes the compute resource requirements. 
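A brief sketch (the quantities are arbitrary) of ResourceRequirements on a container template:

```yaml
# Fragment of a container template: requests are what the scheduler reserves,
# limits are the enforced maximum (values chosen only for illustration).
container:
  image: alpine:3.19
  command: [sh, -c, "sleep 5"]
  resources:
    requests:
      cpu: 100m
      memory: 64Mi
    limits:
      cpu: 500m
      memory: 128Mi
```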
Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) Fields \u00b6 Field Name Field Type Description limits Quantity Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests Quantity Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ SecurityContext \u00b6 SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Examples with this field (click to open) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) Fields \u00b6 Field Name Field Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities Capabilities The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. 
runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions SELinuxOptions The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile SeccompProfile The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions WindowsSecurityContextOptions The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. VolumeDevice \u00b6 volumeDevice describes a mapping of a raw block device within a container. Fields \u00b6 Field Name Field Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod SecretKeySelector \u00b6 SecretKeySelector selects a key of a Secret. Examples with this field (click to open) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) Fields \u00b6 Field Name Field Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined ManagedFieldsEntry \u00b6 ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to. Fields \u00b6 Field Name Field Type Description apiVersion string APIVersion defines the version of this resource that this field set applies to. The format is \"group/version\" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted. 
fieldsType string FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: \"FieldsV1\". fieldsV1 FieldsV1 FieldsV1 holds the first JSON version format as described in the \"FieldsV1\" type. manager string Manager is an identifier of the workflow managing these fields. operation string Operation is the type of operation which led to this ManagedFieldsEntry being created. The only valid values for this field are 'Apply' and 'Update'. subresource string Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource. time Time Time is the timestamp of when these fields were set. It should always be empty if Operation is 'Apply'. OwnerReference \u00b6 OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. Fields \u00b6 Field Name Field Type Description apiVersion string API version of the referent. blockOwnerDeletion boolean If true, AND if the owner has the \"foregroundDeletion\" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. Defaults to false. To set this field, a user needs \"delete\" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names uid string UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids NodeAffinity \u00b6 Node affinity is a group of node affinity scheduling rules. Fields \u00b6 Field Name Field Type Description preferredDuringSchedulingIgnoredDuringExecution Array< PreferredSchedulingTerm > The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution NodeSelector If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. PodAffinity \u00b6 Pod affinity is a group of inter pod affinity scheduling rules. 
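NodeAffinity (above) and PodAffinity (below) are normally set together under an affinity stanza. A hedged sketch of how this might look on a Workflow's spec.affinity (the label keys, values, and weight are invented):

```yaml
# Fragment of a Workflow spec: scheduling constraints applied to its pods.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch      # well-known node label
              operator: In
              values: [amd64]
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app: my-cache                # placeholder label to co-locate with
```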
Fields \u00b6 Field Name Field Type Description preferredDuringSchedulingIgnoredDuringExecution Array< WeightedPodAffinityTerm > The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution Array< PodAffinityTerm > If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. PodAntiAffinity \u00b6 Pod anti affinity is a group of inter pod anti affinity scheduling rules. Fields \u00b6 Field Name Field Type Description preferredDuringSchedulingIgnoredDuringExecution Array< WeightedPodAffinityTerm > The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution Array< PodAffinityTerm > If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. PodDNSConfigOption \u00b6 PodDNSConfigOption defines DNS resolver options of a pod. Examples with this field (click to open) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) Fields \u00b6 Field Name Field Type Description name string Required. value string No description available SELinuxOptions \u00b6 SELinuxOptions are the labels to be applied to the container Fields \u00b6 Field Name Field Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 
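A hedged sketch (every label value below is invented) of SELinuxOptions set through a container's securityContext:

```yaml
# Fragment of a container template: SELinux labels applied to the container.
container:
  image: alpine:3.19
  command: [sh, -c, "id"]
  securityContext:
    seLinuxOptions:
      user: system_u
      role: system_r
      type: container_t
      level: "s0:c123,c456"
```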
SeccompProfile \u00b6 SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Fields \u00b6 Field Name Field Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is \"Localhost\". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - \"Localhost\" indicates a profile defined in a file on the node should be used. The file's location relative to /seccomp. - \"RuntimeDefault\" represents the default container runtime seccomp profile. - \"Unconfined\" indicates no seccomp profile is applied (A.K.A. unconfined). Sysctl \u00b6 Sysctl defines a kernel parameter to be set Fields \u00b6 Field Name Field Type Description name string Name of a property to set value string Value of a property to set WindowsSecurityContextOptions \u00b6 WindowsSecurityContextOptions contain Windows-specific options and credentials. Fields \u00b6 Field Name Field Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
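A minimal sketch (the image and user name are placeholders) of WindowsSecurityContextOptions on a Windows container's securityContext:

```yaml
# Fragment of a Windows container template: run the entrypoint as a
# specific Windows user (placeholder image and user name).
container:
  image: mcr.microsoft.com/windows/nanoserver:ltsc2022
  command: [cmd, /c, "echo hello"]
  securityContext:
    windowsOptions:
      runAsUserName: ContainerUser
```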
PersistentVolumeClaimSpec \u00b6 PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - 
[`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - 
[`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - 
[`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - 
[`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - 
[`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml) Fields \u00b6 Field Name Field Type Description accessModes Array< string > AccessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource TypedLocalObjectReference This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. dataSourceRef TypedLocalObjectReference Specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Alpha) Using this field requires the AnyVolumeDataSource feature gate to be enabled. resources ResourceRequirements Resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector LabelSelector A label query over volumes to consider for binding. storageClassName string Name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string VolumeName is the binding reference to the PersistentVolume backing this claim. PersistentVolumeClaimStatus \u00b6 PersistentVolumeClaimStatus is the current status of a persistent volume claim. 
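Before the status fields, here is a minimal sketch of how the PersistentVolumeClaimSpec fields above typically appear under a Workflow's volumeClaimTemplates; the workdir volume name and the 1Gi request are illustrative assumptions, not values required by the API:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: pvc-spec-example-
spec:
  entrypoint: main
  volumeClaimTemplates:
    - metadata:
        name: workdir                 # illustrative name for the generated PVC
      spec:                           # this block is a PersistentVolumeClaimSpec
        accessModes:
          - ReadWriteOnce             # accessModes, as described above
        resources:
          requests:
            storage: 1Gi              # resources.requests takes a Quantity
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [sh, -c]
        args:
          - echo hello > /work/out.txt
        volumeMounts:
          - name: workdir
            mountPath: /work
```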
Fields \u00b6 Field Name Field Type Description accessModes Array< string > AccessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResources Quantity The storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity Quantity Represents the actual resources of the underlying volume. conditions Array< PersistentVolumeClaimCondition > Current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. phase string Phase represents the current phase of PersistentVolumeClaim. Possible enum values: - \"Bound\" used for PersistentVolumeClaims that are bound - \"Lost\" used for PersistentVolumeClaims that lost their underlying PersistentVolume. The claim was bound to a PersistentVolume and this volume does not exist any longer and all data on it was lost. - \"Pending\" used for PersistentVolumeClaims that are not yet bound resizeStatus string ResizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. AWSElasticBlockStoreVolumeSource \u00b6 Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). readOnly boolean Specify \"true\" to force and set the ReadOnly property in VolumeMounts to \"true\". If omitted, the default is \"false\". More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string Unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore AzureDiskVolumeSource \u00b6 AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Fields \u00b6 Field Name Field Type Description cachingMode string Host Caching mode: None, Read Only, Read Write. 
diskName string The Name of the data disk in the blob storage diskURI string The URI the data disk in the blob storage fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. kind string Expected values Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. AzureFileVolumeSource \u00b6 AzureFile represents an Azure File Service mount on the host and bind mount to the pod. Fields \u00b6 Field Name Field Type Description readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string the name of secret that contains Azure Storage Account Name and Key shareName string Share Name CephFSVolumeSource \u00b6 Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Fields \u00b6 Field Name Field Type Description monitors Array< string > Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef LocalObjectReference Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string Optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it CinderVolumeSource \u00b6 Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef LocalObjectReference Optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volume id used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md ConfigMapVolumeSource \u00b6 Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. 
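A minimal sketch of the ConfigMapVolumeSource fields on a Workflow volume follows; the app-config ConfigMap and its config.yaml key are illustrative assumptions rather than anything the API requires:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: configmap-volume-example-
spec:
  entrypoint: main
  volumes:
    - name: config
      configMap:
        name: app-config              # assumed ConfigMap in the same namespace
        defaultMode: 0444             # octal mode bits, per the field description below
        items:
          - key: config.yaml          # project only this key
            path: config.yaml         # relative path inside the mount
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [cat, /etc/app/config.yaml]
        volumeMounts:
          - name: config
            mountPath: /etc/app
```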
Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) Fields \u00b6 Field Name Field Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its keys must be defined CSIVolumeSource \u00b6 Represents a source location of a volume to mount, managed by an external CSI driver Fields \u00b6 Field Name Field Type Description driver string Driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string Filesystem type to mount. Ex. \"ext4\", \"xfs\", \"ntfs\". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef LocalObjectReference NodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean Specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes Map< string , string > VolumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. DownwardAPIVolumeSource \u00b6 DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. Fields \u00b6 Field Name Field Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items Array< DownwardAPIVolumeFile > Items is a list of downward API volume file EmptyDirVolumeSource \u00b6 Represents an empty directory for a pod. 
Empty directory volumes support ownership management and SELinux relabeling. Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) Fields \u00b6 Field Name Field Type Description medium string What type of storage medium should back this directory. The default is \"\" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit Quantity Total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir EphemeralVolumeSource \u00b6 Represents an ephemeral volume that is handled by a normal storage driver. Fields \u00b6 Field Name Field Type Description volumeClaimTemplate PersistentVolumeClaimTemplate Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. FCVolumeSource \u00b6 Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. lun integer Optional: FC target lun number readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs Array< string > Optional: FC target worldwide names (WWNs) wwids Array< string > Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. FlexVolumeSource \u00b6 FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. 
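Before the FlexVolume fields, here is a minimal sketch of the emptyDir fields (medium, sizeLimit) described above, used as a scratch directory; the scratch name and 64Mi limit are illustrative assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: emptydir-example-
spec:
  entrypoint: main
  volumes:
    - name: scratch
      emptyDir:
        medium: Memory                # back the directory with tmpfs rather than node disk
        sizeLimit: 64Mi               # Quantity; usage above this can trigger eviction
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [sh, -c]
        args:
          - dd if=/dev/zero of=/scratch/blob bs=1M count=10
        volumeMounts:
          - name: scratch
            mountPath: /scratch
```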
Fields \u00b6 Field Name Field Type Description driver string Driver is the name of the driver to use for this volume. fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". The default filesystem depends on FlexVolume script. options Map< string , string > Optional: Extra command options if any. readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference Optional: SecretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. FlockerVolumeSource \u00b6 Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Fields \u00b6 Field Name Field Type Description datasetName string Name of the dataset stored as metadata -> name on the dataset for Flocker should be considered as deprecated datasetUUID string UUID of the dataset. This is unique identifier of a Flocker dataset GCEPersistentDiskVolumeSource \u00b6 Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string Unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk GitRepoVolumeSource \u00b6 Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Fields \u00b6 Field Name Field Type Description directory string Target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string Repository URL revision string Commit hash for the specified revision. 
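Because GitRepo is deprecated, the replacement pattern mentioned above (an init container cloning into an emptyDir) looks roughly like the sketch below; the repository URL and the alpine/git image, whose entrypoint is assumed to be the git CLI, are illustrative choices:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: git-clone-emptydir-
spec:
  entrypoint: main
  volumes:
    - name: src
      emptyDir: {}                    # stands in for the deprecated gitRepo volume
  templates:
    - name: main
      initContainers:
        - name: clone
          image: alpine/git           # assumed image whose entrypoint is the git CLI
          args:
            - clone
            - --depth=1
            - https://github.com/argoproj/argo-workflows.git
            - /src
          volumeMounts:
            - name: src
              mountPath: /src
      container:
        image: alpine:3.18
        command: [ls, /src]
        volumeMounts:
          - name: src
            mountPath: /src
```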
GlusterfsVolumeSource \u00b6 Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. Fields \u00b6 Field Name Field Type Description endpoints string EndpointsName is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string Path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean ReadOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod HostPathVolumeSource \u00b6 Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. Fields \u00b6 Field Name Field Type Description path string Path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string Type for HostPath Volume Defaults to \"\" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath ISCSIVolumeSource \u00b6 Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Fields \u00b6 Field Name Field Type Description chapAuthDiscovery boolean whether support iSCSI Discovery CHAP authentication chapAuthSession boolean whether support iSCSI Session CHAP authentication fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string Custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, a new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string Target iSCSI Qualified Name. iscsiInterface string iSCSI Interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer iSCSI Target Lun number. portals Array< string > iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef LocalObjectReference CHAP Secret for iSCSI target and initiator authentication targetPortal string iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). NFSVolumeSource \u00b6 Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. Fields \u00b6 Field Name Field Type Description path string Path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean ReadOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string Server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs PersistentVolumeClaimVolumeSource \u00b6 PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. 
This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). Examples with this field (click to open) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) Fields \u00b6 Field Name Field Type Description claimName string ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean Will force the ReadOnly setting in VolumeMounts. Default false. PhotonPersistentDiskVolumeSource \u00b6 Represents a Photon Controller persistent disk resource. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. pdID string ID that identifies Photon Controller persistent disk PortworxVolumeSource \u00b6 PortworxVolumeSource represents a Portworx volume resource. Fields \u00b6 Field Name Field Type Description fsType string FSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\". Implicitly inferred to be \"ext4\" if unspecified. readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string VolumeID uniquely identifies a Portworx volume ProjectedVolumeSource \u00b6 Represents a projected volume source Fields \u00b6 Field Name Field Type Description defaultMode integer Mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources Array< VolumeProjection > list of volume projections QuobyteVolumeSource \u00b6 Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. Fields \u00b6 Field Name Field Type Description group string Group to map volume access to Default is no group readOnly boolean ReadOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string Registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string Tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string User to map volume access to Defaults to serviceaccount user volume string Volume is a string that references an already created Quobyte volume by name. RBDVolumeSource \u00b6 Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". 
Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string The rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string Keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors Array< string > A collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string The rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef LocalObjectReference SecretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string The rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it ScaleIOVolumeSource \u00b6 ScaleIOVolumeSource represents a persistent ScaleIO volume Fields \u00b6 Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Default is \"xfs\". gateway string The host address of the ScaleIO API Gateway. protectionDomain string The name of the ScaleIO Protection Domain for the configured storage. readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference SecretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean Flag to enable/disable SSL communication with Gateway, default false storageMode string Indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string The ScaleIO Storage Pool associated with the protection domain. system string The name of the storage system as configured in ScaleIO. volumeName string The name of a volume already created in the ScaleIO system that is associated with this volume source. SecretVolumeSource \u00b6 Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) Fields \u00b6 Field Name Field Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. 
items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. optional boolean Specify whether the Secret or its keys must be defined secretName string Name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret StorageOSVolumeSource \u00b6 Represents a StorageOS persistent volume resource. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference SecretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string VolumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string VolumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to \"default\" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. VsphereVirtualDiskVolumeSource \u00b6 Represents a vSphere volume resource. Fields \u00b6 Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. storagePolicyID string Storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string Storage Policy Based Management (SPBM) profile name. volumePath string Path that identifies vSphere volume vmdk LabelSelectorRequirement \u00b6 A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Fields \u00b6 Field Name Field Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values Array< string > values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. EnvVarSource \u00b6 EnvVarSource represents a source for the value of an EnvVar. 
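Before the example list for this type, a minimal sketch of EnvVarSource on a Workflow container, combining fieldRef and configMapKeyRef; the app-config ConfigMap and its log-level key are illustrative assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: envvarsource-example-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [sh, -c]
        args:
          - echo pod=$POD_NAME level=$LOG_LEVEL
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:               # EnvVarSource.fieldRef
                fieldPath: metadata.name
          - name: LOG_LEVEL
            valueFrom:
              configMapKeyRef:        # EnvVarSource.configMapKeyRef
                name: app-config      # assumed ConfigMap
                key: log-level        # assumed key
```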
Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) Fields \u00b6 Field Name Field Type Description configMapKeyRef ConfigMapKeySelector Selects a key of a ConfigMap. fieldRef ObjectFieldSelector Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef ResourceFieldSelector Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. 
secretKeyRef SecretKeySelector Selects a key of a secret in the pod's namespace ConfigMapEnvSource \u00b6 ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Fields \u00b6 Field Name Field Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined SecretEnvSource \u00b6 SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Fields \u00b6 Field Name Field Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined LifecycleHandler \u00b6 LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Fields \u00b6 Field Name Field Type Description exec ExecAction Exec specifies the action to take. httpGet HTTPGetAction HTTPGet specifies the http request to perform. tcpSocket TCPSocketAction Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There is no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. ExecAction \u00b6 ExecAction describes a \"run in container\" action. Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) Fields \u00b6 Field Name Field Type Description command Array< string > Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. GRPCAction \u00b6 No description available Fields \u00b6 Field Name Field Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC. HTTPGetAction \u00b6 HTTPGetAction describes an action based on HTTP Get requests. Examples with this field (click to open) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) Fields \u00b6 Field Name Field Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set \"Host\" in httpHeaders instead. httpHeaders Array< HTTPHeader > Custom headers to set in the request. HTTP allows repeated headers. path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 
scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - \"HTTP\" means that the scheme used will be http:// - \"HTTPS\" means that the scheme used will be https:// TCPSocketAction \u00b6 TCPSocketAction describes an action based on opening a socket Fields \u00b6 Field Name Field Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. Quantity \u00b6 Quantity is a fixed-point representation of a number. It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors. The serialization format is: <quantity> ::= <signedNumber><suffix> (Note that <suffix> may be empty, from the \"\" case in <decimalSI>.) <digit> ::= 0 | 1 | ... | 9 <digits> ::= <digit> | <digit><digits> <number> ::= <digits> | <digits>.<digits> | <digits>. | .<digits> <sign> ::= \"+\" | \"-\" <signedNumber> ::= <number> | <sign><number> <suffix> ::= <binarySI> | <decimalExponent> | <decimalSI> <binarySI> ::= Ki | Mi | Gi | Ti | Pi | Ei (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html) <decimalSI> ::= m | \"\" | k | M | G | T | P | E (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.) <decimalExponent> ::= \"e\" <signedNumber> | \"E\" <signedNumber> No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will be rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized. Before serializing, Quantity will be put in \"canonical form\". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that: a. No precision is lost b. No fractional digits will be emitted c. The exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative. Examples: 1.5 will be serialized as \"1500m\" 1.5Gi will be serialized as \"1536Mi\" Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise. Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.) This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation. Examples with this field (click to open) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) Capabilities \u00b6 Adds and removes POSIX capabilities from running containers. Fields \u00b6 Field Name Field Type Description add Array< string > Added capabilities drop Array< string > Removed capabilities FieldsV1 \u00b6 FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format. Each key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. 
The string will follow one of these four formats: 'f:<name>', where <name> is the name of a field in a struct, or key in a map 'v:<value>', where <value> is the exact json formatted value of a list item 'i:<index>', where <index> is the position of an item in a list 'k:<keys>', where <keys> is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set. The exact format is defined in sigs.k8s.io/structured-merge-diff PreferredSchedulingTerm \u00b6 An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Fields \u00b6 Field Name Field Type Description preference NodeSelectorTerm A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. NodeSelector \u00b6 A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. Fields \u00b6 Field Name Field Type Description nodeSelectorTerms Array< NodeSelectorTerm > Required. A list of node selector terms. The terms are ORed. WeightedPodAffinityTerm \u00b6 The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Fields \u00b6 Field Name Field Type Description podAffinityTerm PodAffinityTerm Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. PodAffinityTerm \u00b6 Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Fields \u00b6 Field Name Field Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means \"this pod's namespace\". An empty selector ({}) matches all namespaces. This field is beta-level and is only honored when PodAffinityNamespaceSelector feature is enabled. namespaces Array< string > namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means \"this pod's namespace\" topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. TypedLocalObjectReference \u00b6 TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. 
Fields \u00b6 Field Name Field Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced PersistentVolumeClaimCondition \u00b6 PersistentVolumeClaimCondition contains details about state of pvc Fields \u00b6 Field Name Field Type Description lastProbeTime Time Last time we probed the condition. lastTransitionTime Time Last time the condition transitioned from one status to another. message string Human-readable message indicating details about last transition. reason string Unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports \"ResizeStarted\" that means the underlying persistent volume is being resized. status string No description available type string Possible enum values: - \"FileSystemResizePending\" - controller resize is finished and a file system resize is pending on node - \"Resizing\" - a user-triggered resize of the pvc has been started KeyToPath \u00b6 Maps a string key to a path within a volume. Fields \u00b6 Field Name Field Type Description key string The key to project. mode integer Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string The relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. DownwardAPIVolumeFile \u00b6 DownwardAPIVolumeFile represents information to create the file containing the pod field Fields \u00b6 Field Name Field Type Description fieldRef ObjectFieldSelector Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef ResourceFieldSelector Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. PersistentVolumeClaimTemplate \u00b6 PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. Fields \u00b6 Field Name Field Type Description metadata ObjectMeta May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec PersistentVolumeClaimSpec The specification for the PersistentVolumeClaim. 
The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. VolumeProjection \u00b6 Projection that may be projected along with other supported volume types Fields \u00b6 Field Name Field Type Description configMap ConfigMapProjection information about the configMap data to project downwardAPI DownwardAPIProjection information about the downwardAPI data to project secret SecretProjection information about the secret data to project serviceAccountToken ServiceAccountTokenProjection information about the serviceAccountToken data to project ObjectFieldSelector \u00b6 ObjectFieldSelector selects an APIVersioned field of an object. Fields \u00b6 Field Name Field Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to \"v1\". fieldPath string Path of the field to select in the specified API version. ResourceFieldSelector \u00b6 ResourceFieldSelector represents container resources (cpu, memory) and their output format Fields \u00b6 Field Name Field Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to \"1\" resource string Required: resource to select HTTPHeader \u00b6 HTTPHeader describes a custom header to be used in HTTP probes Fields \u00b6 Field Name Field Type Description name string The header field name value string The header field value NodeSelectorTerm \u00b6 A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Fields \u00b6 Field Name Field Type Description matchExpressions Array< NodeSelectorRequirement > A list of node selector requirements by node's labels. matchFields Array< NodeSelectorRequirement > A list of node selector requirements by node's fields. ConfigMapProjection \u00b6 Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) Fields \u00b6 Field Name Field Type Description items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its keys must be defined DownwardAPIProjection \u00b6 Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode. 
Fields \u00b6 Field Name Field Type Description items Array< DownwardAPIVolumeFile > Items is a list of DownwardAPIVolume file SecretProjection \u00b6 Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) Fields \u00b6 Field Name Field Type Description items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined ServiceAccountTokenProjection \u00b6 ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pod's runtime filesystem for use against APIs (Kubernetes API Server or otherwise). Fields \u00b6 Field Name Field Type Description audience string Audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer ExpirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string Path is the path relative to the mount point of the file to project the token into. NodeSelectorRequirement \u00b6 A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Fields \u00b6 Field Name Field Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - \"DoesNotExist\" - \"Exists\" - \"Gt\" - \"In\" - \"Lt\" - \"NotIn\" values Array< string > An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. 
This array is replaced during a strategic merge patch.","title":"Field Reference"},{"location":"fields/#field-reference","text":"","title":"Field Reference"},{"location":"fields/#workflow","text":"Workflow is the definition of a workflow resource Examples (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - 
[`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`daemoned-stateful-set-with-service.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemoned-stateful-set-with-service.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - 
[`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - 
[`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-jobs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-jobs.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-orchestration.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-orchestration.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch-basic.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch-basic.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-resource-log-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-resource-log-selector.yaml) - [`k8s-set-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-set-owner-reference.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - 
[`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resource-delete-with-flags.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resource-delete-with-flags.yaml) - [`resource-flags.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resource-flags.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - 
[`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - 
[`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml)","title":"Workflow"},{"location":"fields/#fields","text":"Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta No description available spec WorkflowSpec No description available status WorkflowStatus No description available","title":"Fields"},{"location":"fields/#cronworkflow","text":"CronWorkflow is the definition of a scheduled workflow resource Examples (click to open) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml)","title":"CronWorkflow"},{"location":"fields/#fields_1","text":"Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta No description available spec CronWorkflowSpec No description available status CronWorkflowStatus No description available","title":"Fields"},{"location":"fields/#workflowtemplate","text":"WorkflowTemplate is the definition of a workflow template resource Examples (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"WorkflowTemplate"},{"location":"fields/#fields_2","text":"Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.io.k8s.community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta No description available spec WorkflowSpec No description available","title":"Fields"},{"location":"fields/#workflowspec","text":"WorkflowSpec is the specification of a Workflow. 
Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - 
[`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - 
[`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - 
[`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - 
[`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - 
[`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml)","title":"WorkflowSpec"},{"location":"fields/#fields_3","text":"Field Name Field Type Description activeDeadlineSeconds integer Optional duration in seconds relative to the workflow start time which the workflow is allowed to run before the controller terminates the io.argoproj.workflow.v1alpha1. A value of zero is used to terminate a Running workflow affinity Affinity Affinity sets the scheduling constraints for all pods in the io.argoproj.workflow.v1alpha1. Can be overridden by an affinity specified in the template archiveLogs boolean ArchiveLogs indicates if the container logs should be archived arguments Arguments Arguments contain the parameters and artifacts sent to the workflow entrypoint Parameters are referencable globally using the 'workflow' variable prefix. e.g. {{io.argoproj.workflow.v1alpha1.parameters.myparam}} artifactGC WorkflowLevelArtifactGC ArtifactGC describes the strategy to use when deleting artifacts from completed or deleted workflows (applies to all output Artifacts unless Artifact.ArtifactGC is specified, which overrides this) artifactRepositoryRef ArtifactRepositoryRef ArtifactRepositoryRef specifies the configMap name and key containing the artifact repository config. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods. ServiceAccountName of ExecutorConfig must be specified if this value is false. dnsConfig PodDNSConfig PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to \"ClusterFirst\". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. entrypoint string Entrypoint is a template reference to the starting point of the io.argoproj.workflow.v1alpha1. executor ExecutorConfig Executor holds configurations of executor containers of the io.argoproj.workflow.v1alpha1. hooks LifecycleHook Hooks holds the lifecycle hook which is invoked at lifecycle of step, irrespective of the success, failure, or error status of the primary step hostAliases Array< HostAlias > No description available hostNetwork boolean Host networking requested for this workflow pod. Default to false. imagePullSecrets Array< LocalObjectReference > ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. 
More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod metrics Metrics Metrics are a list of metrics emitted from this Workflow nodeSelector Map< string , string > NodeSelector is a selector which will result in all pods of the workflow to be scheduled on the selected node(s). This is able to be overridden by a nodeSelector specified in the template. onExit string OnExit is a template reference which is invoked at the end of the workflow, irrespective of the success, failure, or error of the primary io.argoproj.workflow.v1alpha1. parallelism integer Parallelism limits the max total parallel pods that can execute at the same time in a workflow podDisruptionBudget PodDisruptionBudgetSpec PodDisruptionBudget holds the number of concurrent disruptions that you allow for Workflow's Pods. Controller will automatically add the selector with workflow name, if selector is empty. Optional: Defaults to empty. podGC PodGC PodGC describes the strategy to use when deleting completed pods podMetadata Metadata PodMetadata defines additional metadata that should be applied to workflow pods ~~ podPriority ~~ ~~ integer ~~ ~~Priority to apply to workflow pods.~~ DEPRECATED: Use PodPriorityClassName instead. podPriorityClassName string PriorityClassName to apply to workflow pods. podSpecPatch string PodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of container fields which are not strings (e.g. resource limits). priority integer Priority is used if controller is configured to process limited number of workflows in parallel. Workflows with higher priority are processed first. retryStrategy RetryStrategy RetryStrategy for all templates in the io.argoproj.workflow.v1alpha1. schedulerName string Set scheduler name for all pods. Will be overridden if container/script template's scheduler name is set. Default scheduler will be used if neither specified. securityContext PodSecurityContext SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to run all pods of the workflow as. shutdown string Shutdown will shutdown the workflow according to its ShutdownStrategy suspend boolean Suspend will suspend the workflow and prevent execution of any future steps in the workflow synchronization Synchronization Synchronization holds synchronization lock configuration for this Workflow templateDefaults Template TemplateDefaults holds default template values that will apply to all templates in the Workflow, unless overridden on the template-level templates Array< Template > Templates is a list of workflow templates used in a workflow tolerations Array< Toleration > Tolerations to apply to workflow pods. ttlStrategy TTLStrategy TTLStrategy limits the lifetime of a Workflow that has finished execution depending on if it Succeeded or Failed. If this struct is set, once the Workflow finishes, it will be deleted after the time to live expires. If this field is unset, the controller config map will hold the default values. volumeClaimGC VolumeClaimGC VolumeClaimGC describes the strategy to use when deleting volumes from completed workflows volumeClaimTemplates Array< PersistentVolumeClaim > VolumeClaimTemplates is a list of claims that containers are allowed to reference. 
The Workflow controller will create the claims at the beginning of the workflow and delete the claims upon completion of the workflow volumes Array< Volume > Volumes is a list of volumes that can be mounted by containers in a io.argoproj.workflow.v1alpha1. workflowMetadata WorkflowMetadata WorkflowMetadata contains some metadata of the workflow to refer to workflowTemplateRef WorkflowTemplateRef WorkflowTemplateRef holds a reference to a WorkflowTemplate for execution","title":"Fields"},{"location":"fields/#workflowstatus","text":"WorkflowStatus contains overall status information about a workflow","title":"WorkflowStatus"},{"location":"fields/#fields_4","text":"Field Name Field Type Description artifactGCStatus ArtGCStatus ArtifactGCStatus maintains the status of Artifact Garbage Collection artifactRepositoryRef ArtifactRepositoryRefStatus ArtifactRepositoryRef is used to cache the repository to use so we do not need to determine it every time we reconcile. compressedNodes string Compressed and base64 decoded Nodes map conditions Array< Condition > Conditions is a list of conditions the Workflow may have estimatedDuration integer EstimatedDuration in seconds. finishedAt Time Time at which this workflow completed message string A human readable message indicating details about why the workflow is in this condition. nodes NodeStatus Nodes is a mapping between a node ID and the node's status. offloadNodeStatusVersion string Whether or not node status has been offloaded to a database. If exists, then Nodes and CompressedNodes will be empty. This will actually be populated with a hash of the offloaded data. outputs Outputs Outputs captures output values and artifact locations produced by the workflow via global outputs persistentVolumeClaims Array< Volume > PersistentVolumeClaims tracks all PVCs that were created as part of the io.argoproj.workflow.v1alpha1. The contents of this list are drained at the end of the workflow. phase string Phase is a simple, high-level summary of where the workflow is in its lifecycle. Will be \"\" (Unknown), \"Pending\", or \"Running\" before the workflow is completed, and \"Succeeded\", \"Failed\" or \"Error\" once the workflow has completed. progress string Progress to completion resourcesDuration Map< integer , int64 > ResourcesDuration is the total for the workflow startedAt Time Time at which this workflow started storedTemplates Template StoredTemplates is a mapping between a template ref and the node's status. storedWorkflowTemplateSpec WorkflowSpec StoredWorkflowSpec stores the WorkflowTemplate spec for future execution. synchronization SynchronizationStatus Synchronization stores the status of synchronization locks taskResultsCompleted Map< boolean , string > Have task results been completed? 
(mapped by Pod name) used to prevent premature garbage collection of artifacts.","title":"Fields"},{"location":"fields/#cronworkflowspec","text":"CronWorkflowSpec is the specification of a CronWorkflow Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - 
[`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - 
[`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - 
[`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - 
[`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - 
[`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - 
[`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml)","title":"CronWorkflowSpec"},{"location":"fields/#fields_5","text":"Field Name Field Type Description concurrencyPolicy string ConcurrencyPolicy is the K8s-style concurrency policy that will be used failedJobsHistoryLimit integer FailedJobsHistoryLimit is the number of failed jobs to be kept at a time schedule string Schedule is a schedule to run the Workflow in Cron format startingDeadlineSeconds integer StartingDeadlineSeconds is the K8s-style deadline that will limit the time a CronWorkflow will be run after its original scheduled time if it is missed. successfulJobsHistoryLimit integer SuccessfulJobsHistoryLimit is the number of successful jobs to be kept at a time suspend boolean Suspend is a flag that will stop new CronWorkflows from running if set to true timezone string Timezone is the timezone against which the cron schedule will be calculated, e.g. \"Asia/Tokyo\". Default is machine's local time. workflowMetadata ObjectMeta WorkflowMetadata contains some metadata of the workflow to be run workflowSpec WorkflowSpec WorkflowSpec is the spec of the workflow to be run","title":"Fields"},{"location":"fields/#cronworkflowstatus","text":"CronWorkflowStatus is the status of a CronWorkflow","title":"CronWorkflowStatus"},{"location":"fields/#fields_6","text":"Field Name Field Type Description active Array< ObjectReference > Active is a list of active workflows stemming from this CronWorkflow conditions Array< Condition > Conditions is a list of conditions the CronWorkflow may have lastScheduledTime Time LastScheduleTime is the last time the CronWorkflow was scheduled","title":"Fields"},{"location":"fields/#arguments","text":"Arguments to a template Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - 
[`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - 
[`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - 
[`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml)","title":"Arguments"},{"location":"fields/#fields_7","text":"Field Name Field Type Description artifacts Array< Artifact > Artifacts is the list of artifacts to pass to the template or workflow parameters Array< Parameter > Parameters is the list of parameters to pass to the template or workflow","title":"Fields"},{"location":"fields/#workflowlevelartifactgc","text":"WorkflowLevelArtifactGC describes how to delete artifacts from completed Workflows - this spec is used on the Workflow level Examples with this field (click to open) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml)","title":"WorkflowLevelArtifactGC"},{"location":"fields/#fields_8","text":"Field Name Field 
Type Description forceFinalizerRemoval boolean ForceFinalizerRemoval: if set to true, the finalizer will be removed in the case that Artifact GC fails podMetadata Metadata PodMetadata is an optional field for specifying the Labels and Annotations that should be assigned to the Pod doing the deletion podSpecPatch string PodSpecPatch holds strategic merge patch to apply against the artgc pod spec. serviceAccountName string ServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion strategy string Strategy is the strategy to use.","title":"Fields"},{"location":"fields/#artifactrepositoryref","text":"No description available Examples with this field (click to open) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml)","title":"ArtifactRepositoryRef"},{"location":"fields/#fields_9","text":"Field Name Field Type Description configMap string The name of the config map. Defaults to \"artifact-repositories\". key string The config map key. Defaults to the value of the \"workflows.argoproj.io/default-artifact-repository\" annotation.","title":"Fields"},{"location":"fields/#executorconfig","text":"ExecutorConfig holds configurations of an executor container.","title":"ExecutorConfig"},{"location":"fields/#fields_10","text":"Field Name Field Type Description serviceAccountName string ServiceAccountName specifies the service account name of the executor container.","title":"Fields"},{"location":"fields/#lifecyclehook","text":"No description available Examples with this field (click to open) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml)","title":"LifecycleHook"},{"location":"fields/#fields_11","text":"Field Name Field Type Description arguments Arguments Arguments hold arguments to the template expression string Expression is a condition expression for when a node will be retried. 
If it evaluates to false, the node will not be retried and the retry strategy will be ignored template string Template is the name of the template to execute by the hook templateRef TemplateRef TemplateRef is the reference to the template resource to execute by the hook","title":"Fields"},{"location":"fields/#metrics","text":"Metrics are a list of metrics emitted from a Workflow/Template Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml)","title":"Metrics"},{"location":"fields/#fields_12","text":"Field Name Field Type Description prometheus Array< Prometheus > Prometheus is a list of prometheus metrics to be emitted","title":"Fields"},{"location":"fields/#podgc","text":"PodGC describes how to delete completed pods as they complete Examples with this field (click to open) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml)","title":"PodGC"},{"location":"fields/#fields_13","text":"Field Name Field Type Description deleteDelayDuration Duration DeleteDelayDuration specifies the duration before pods in the GC queue get deleted. labelSelector LabelSelector LabelSelector is the label selector to check if the pods match the labels before being added to the pod GC queue. strategy string Strategy is the strategy to use. One of \"OnPodCompletion\", \"OnPodSuccess\", \"OnWorkflowCompletion\", \"OnWorkflowSuccess\". If unset, does not delete Pods","title":"Fields"},{"location":"fields/#metadata","text":"Pod metadata Examples with this field (click to open) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml)","title":"Metadata"},{"location":"fields/#fields_14","text":"Field Name Field Type Description annotations Map< string , string > No description available labels Map< string , string > No description available","title":"Fields"},{"location":"fields/#retrystrategy","text":"RetryStrategy provides controls on how to retry a workflow step Examples with this field (click to open) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - 
[`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"RetryStrategy"},{"location":"fields/#fields_15","text":"Field Name Field Type Description affinity RetryAffinity Affinity prevents running workflow's step on the same host backoff Backoff Backoff is a backoff strategy expression string Expression is a condition expression for when a node will be retried. If it evaluates to false, the node will not be retried and the retry strategy will be ignored limit IntOrString Limit is the maximum number of retry attempts when retrying a container. It does not include the original container; the maximum number of total attempts will be limit + 1 . retryPolicy string RetryPolicy is a policy of NodePhase statuses that will be retried","title":"Fields"},{"location":"fields/#synchronization","text":"Synchronization holds synchronization lock configuration Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml)","title":"Synchronization"},{"location":"fields/#fields_16","text":"Field Name Field Type Description mutex Mutex Mutex holds the Mutex lock details semaphore SemaphoreRef Semaphore holds the Semaphore configuration","title":"Fields"},{"location":"fields/#template","text":"Template is a reusable and composable unit of execution in a workflow Examples with this field (click to open) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml)","title":"Template"},{"location":"fields/#fields_17","text":"Field Name Field Type Description activeDeadlineSeconds IntOrString Optional duration in seconds relative to the StartTime that the pod may be active on a node before the system actively tries to terminate the pod; value must be positive integer This field is only applicable to container and script templates. affinity Affinity Affinity sets the pod's scheduling constraints Overrides the affinity set at the workflow level (if any) archiveLocation ArtifactLocation Location in which all files related to the step will be stored (logs, artifacts, etc...). Can be overridden by individual items in Outputs. If omitted, will use the default artifact repository location configured in the controller, appended with the / in the key. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods. ServiceAccountName of ExecutorConfig must be specified if this value is false. container Container Container is the main container image to run in the pod containerSet ContainerSetTemplate ContainerSet groups multiple containers within a single pod. daemon boolean Daemon will allow a workflow to proceed to the next step so long as the container reaches readiness dag DAGTemplate DAG template subtype which runs a DAG data Data Data is a data template executor ExecutorConfig Executor holds configurations of the executor container. failFast boolean FailFast, if specified, will fail this template if any of its child pods has failed. This is useful for when this template is expanded with withItems , etc. 
hostAliases Array< HostAlias > HostAliases is an optional list of hosts and IPs that will be injected into the pod spec http HTTP HTTP makes an HTTP request initContainers Array< UserContainer > InitContainers is a list of containers which run before the main container. inputs Inputs Inputs describe what input parameters and artifacts are supplied to this template memoize Memoize Memoize allows templates to use outputs generated from already executed templates metadata Metadata Metadata sets the pod's metadata, i.e. annotations and labels metrics Metrics Metrics are a list of metrics emitted from this template name string Name is the name of the template nodeSelector Map< string , string > NodeSelector is a selector to schedule this step of the workflow to be run on the selected node(s). Overrides the selector set at the workflow level. outputs Outputs Outputs describe the parameters and artifacts that this template produces parallelism integer Parallelism limits the max total parallel pods that can execute at the same time within the boundaries of this template invocation. If additional steps/dag templates are invoked, the pods created by those templates will not be counted towards this total. plugin Plugin Plugin is a plugin template podSpecPatch string PodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of container fields which are not strings (e.g. resource limits). priority integer Priority to apply to workflow pods. priorityClassName string PriorityClassName to apply to workflow pods. resource ResourceTemplate Resource template subtype which can run k8s resources retryStrategy RetryStrategy RetryStrategy describes how to retry a template when it fails schedulerName string If specified, the pod will be dispatched by the specified scheduler. Or it will be dispatched by the workflow scope scheduler if specified. If neither is specified, the pod will be dispatched by the default scheduler. script ScriptTemplate Script runs a portion of code against an interpreter securityContext PodSecurityContext SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. serviceAccountName string ServiceAccountName to apply to workflow pods sidecars Array< UserContainer > Sidecars is a list of containers which run alongside the main container. Sidecars are automatically killed when the main container completes steps Array> Steps define a series of sequential/parallel workflow steps suspend SuspendTemplate Suspend template subtype which can suspend a workflow when reaching the step synchronization Synchronization Synchronization holds synchronization lock configuration for this template timeout string Timeout allows setting the total node execution timeout duration counting from the node's start time. This duration also includes time in which the node spends in Pending state. This duration may not be applied to Step or DAG templates. tolerations Array< Toleration > Tolerations to apply to workflow pods. 
volumes Array< Volume > Volumes is a list of volumes that can be mounted by containers in a template.","title":"Fields"},{"location":"fields/#ttlstrategy","text":"TTLStrategy is the strategy for the time to live depending on if the workflow succeeded or failed Examples with this field (click to open) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml)","title":"TTLStrategy"},{"location":"fields/#fields_18","text":"Field Name Field Type Description secondsAfterCompletion integer SecondsAfterCompletion is the number of seconds to live after completion secondsAfterFailure integer SecondsAfterFailure is the number of seconds to live after failure secondsAfterSuccess integer SecondsAfterSuccess is the number of seconds to live after success","title":"Fields"},{"location":"fields/#volumeclaimgc","text":"VolumeClaimGC describes how to delete volumes from completed Workflows","title":"VolumeClaimGC"},{"location":"fields/#fields_19","text":"Field Name Field Type Description strategy string Strategy is the strategy to use. One of \"OnWorkflowCompletion\", \"OnWorkflowSuccess\". Defaults to \"OnWorkflowSuccess\"","title":"Fields"},{"location":"fields/#workflowmetadata","text":"No description available Examples with this field (click to open) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml)","title":"WorkflowMetadata"},{"location":"fields/#fields_20","text":"Field Name Field Type Description annotations Map< string , string > No description available labels Map< string , string > No description available labelsFrom LabelValueFrom No description available","title":"Fields"},{"location":"fields/#workflowtemplateref","text":"WorkflowTemplateRef is a reference to a WorkflowTemplate resource. Examples with this field (click to open) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml)","title":"WorkflowTemplateRef"},{"location":"fields/#fields_21","text":"Field Name Field Type Description clusterScope boolean ClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate). 
name string Name is the resource name of the workflow template.","title":"Fields"},{"location":"fields/#artgcstatus","text":"ArtGCStatus maintains state related to ArtifactGC","title":"ArtGCStatus"},{"location":"fields/#fields_22","text":"Field Name Field Type Description notSpecified boolean if this is true, we already checked to see if we need to do it and we don't podsRecouped Map< boolean , string > have completed Pods been processed? (mapped by Pod name) used to prevent re-processing the Status of a Pod more than once strategiesProcessed Map< boolean , string > have Pods been started to perform this strategy? (enables us not to re-process what we've already done)","title":"Fields"},{"location":"fields/#artifactrepositoryrefstatus","text":"No description available Examples with this field (click to open) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml)","title":"ArtifactRepositoryRefStatus"},{"location":"fields/#fields_23","text":"Field Name Field Type Description artifactRepository ArtifactRepository The repository the workflow will use. This maybe empty before v3.1. configMap string The name of the config map. Defaults to \"artifact-repositories\". default boolean If this ref represents the default artifact repository, rather than a config map. key string The config map key. Defaults to the value of the \"workflows.argoproj.io/default-artifact-repository\" annotation. namespace string The namespace of the config map. Defaults to the workflow's namespace, or the controller's namespace (if found).","title":"Fields"},{"location":"fields/#condition","text":"No description available","title":"Condition"},{"location":"fields/#fields_24","text":"Field Name Field Type Description message string Message is the condition message status string Status is the status of the condition type string Type is the type of condition","title":"Fields"},{"location":"fields/#nodestatus","text":"NodeStatus contains status information about an individual node in the workflow","title":"NodeStatus"},{"location":"fields/#fields_25","text":"Field Name Field Type Description boundaryID string BoundaryID indicates the node ID of the associated template root node in which this node belongs to children Array< string > Children is a list of child node IDs daemoned boolean Daemoned tracks whether or not this node was daemoned and need to be terminated displayName string DisplayName is a human readable representation of the node. Unique within a template boundary estimatedDuration integer EstimatedDuration in seconds. finishedAt Time Time at which this node completed hostNodeName string HostNodeName name of the Kubernetes node on which the Pod is running, if applicable id string ID is a unique identifier of a node within the worklow It is implemented as a hash of the node name, which makes the ID deterministic inputs Inputs Inputs captures input parameter values and artifact locations supplied to this template invocation memoizationStatus MemoizationStatus MemoizationStatus holds information about cached nodes message string A human readable message indicating details about why the node is in this condition. name string Name is unique name in the node tree used to generate the node ID nodeFlag NodeFlag NodeFlag tracks some history of node. e.g.) hooked, retried, etc. outboundNodes Array< string > OutboundNodes tracks the node IDs which are considered \"outbound\" nodes to a template invocation. 
For every invocation of a template, there are nodes which we considered as \"outbound\". Essentially, these are last nodes in the execution sequence to run, before the template is considered completed. These nodes are then connected as parents to a following step. In the case of single pod steps (i.e. container, script, resource templates), this list will be nil since the pod itself is already considered the \"outbound\" node. In the case of DAGs, outbound nodes are the \"target\" tasks (tasks with no children). In the case of steps, outbound nodes are all the containers involved in the last step group. NOTE: since templates are composable, the list of outbound nodes are carried upwards when a DAG/steps template invokes another DAG/steps template. In other words, the outbound nodes of a template, will be a superset of the outbound nodes of its last children. outputs Outputs Outputs captures output parameter values and artifact locations produced by this template invocation phase string Phase a simple, high-level summary of where the node is in its lifecycle. Can be used as a state machine. Will be one of these values \"Pending\", \"Running\" before the node is completed, or \"Succeeded\", \"Skipped\", \"Failed\", \"Error\", or \"Omitted\" as a final state. podIP string PodIP captures the IP of the pod for daemoned steps progress string Progress to completion resourcesDuration Map< integer , int64 > ResourcesDuration is indicative, but not accurate, resource duration. This is populated when the nodes completes. startedAt Time Time at which this node started synchronizationStatus NodeSynchronizationStatus SynchronizationStatus is the synchronization status of the node templateName string TemplateName is the template name which this node corresponds to. Not applicable to virtual nodes (e.g. Retry, StepGroup) templateRef TemplateRef TemplateRef is the reference to the template resource which this node corresponds to. Not applicable to virtual nodes (e.g. Retry, StepGroup) templateScope string TemplateScope is the template scope in which the template of this node was retrieved. 
type string Type indicates type of node","title":"Fields"},{"location":"fields/#outputs","text":"Outputs hold parameters, artifacts, and results from a step Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - 
[`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"Outputs"},{"location":"fields/#fields_26","text":"Field Name Field Type Description artifacts Array< Artifact > Artifacts holds the list of output artifacts produced by a step exitCode string ExitCode holds the exit code of a script template parameters Array< Parameter > Parameters holds the list of output parameters produced by a step result string Result holds the result (stdout) of a script template","title":"Fields"},{"location":"fields/#synchronizationstatus","text":"SynchronizationStatus stores the status of semaphore and mutex. 
Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml)","title":"SynchronizationStatus"},{"location":"fields/#fields_27","text":"Field Name Field Type Description mutex MutexStatus Mutex stores this workflow's mutex holder details semaphore SemaphoreStatus Semaphore stores this workflow's Semaphore holder details","title":"Fields"},{"location":"fields/#artifact","text":"Artifact indicates an artifact to place at a specified path Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - 
[`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"Artifact"},{"location":"fields/#fields_28","text":"Field Name Field Type Description archive ArchiveStrategy Archive controls how the artifact will be saved to the artifact repository. archiveLogs boolean ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC describes the strategy to use when to deleting an artifact from completed or deleted workflows artifactory ArtifactoryArtifact Artifactory contains artifactory artifact location details azure AzureArtifact Azure contains Azure Storage artifact location details deleted boolean Has this been deleted? from string From allows an artifact to reference an artifact from a previous step fromExpression string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCS contains GCS artifact location details git GitArtifact Git contains git artifact location details globalName string GlobalName exports an output artifact to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFS contains HDFS artifact location details http HTTPArtifact HTTP contains HTTP artifact location details mode integer mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string name of the artifact. must be unique within a template's inputs/outputs. 
optional boolean Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSS contains OSS artifact location details path string Path is the container path to the artifact raw RawArtifact Raw contains raw artifact location details recurseMode boolean If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3 contains S3 artifact location details subPath string SubPath allows an artifact to be sourced from a subpath within the specified source","title":"Fields"},{"location":"fields/#parameter","text":"Parameter indicate a passed string parameter to a service template with an optional default value Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - 
[`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - 
[`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - 
[`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml)","title":"Parameter"},{"location":"fields/#fields_29","text":"Field Name Field Type Description default string Default is the default value to use for an input parameter if a value was not supplied description string Description is the parameter description enum Array< string > Enum holds a list of string values to choose from, for the actual value of the parameter globalName string GlobalName exports an output parameter to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.parameters.XXXX}} and in workflow.status.outputs.parameters name string Name is the parameter name value string Value is the literal value to use for the parameter. If specified in the context of an input parameter, the value takes precedence over any passed values valueFrom ValueFrom ValueFrom is the source for the output parameter's value","title":"Fields"},{"location":"fields/#templateref","text":"TemplateRef is a reference of template resource. Examples with this field (click to open) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"TemplateRef"},{"location":"fields/#fields_30","text":"Field Name Field Type Description clusterScope boolean ClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate). name string Name is the resource name of the template. 
template string Template is the name of referred template in the resource.","title":"Fields"},{"location":"fields/#prometheus","text":"Prometheus is a prometheus metric to be emitted Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml)","title":"Prometheus"},{"location":"fields/#fields_31","text":"Field Name Field Type Description counter Counter Counter is a counter metric gauge Gauge Gauge is a gauge metric help string Help is a string that describes the metric histogram Histogram Histogram is a histogram metric labels Array< MetricLabel > Labels is a list of metric labels name string Name is the name of the metric when string When is a conditional statement that decides when to emit the metric","title":"Fields"},{"location":"fields/#retryaffinity","text":"RetryAffinity prevents running steps on the same host.","title":"RetryAffinity"},{"location":"fields/#fields_32","text":"Field Name Field Type Description nodeAntiAffinity RetryNodeAntiAffinity No description available","title":"Fields"},{"location":"fields/#backoff","text":"Backoff is a backoff strategy to use within retryStrategy Examples with this field (click to open) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml)","title":"Backoff"},{"location":"fields/#fields_33","text":"Field Name Field Type Description duration string Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. \"2m\", \"1h\") factor IntOrString Factor is a factor to multiply the base duration after each failed retry maxDuration string MaxDuration is the maximum amount of time allowed for a workflow in the backoff strategy","title":"Fields"},{"location":"fields/#mutex","text":"Mutex holds Mutex configuration Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml)","title":"Mutex"},{"location":"fields/#fields_34","text":"Field Name Field Type Description name string name of the mutex namespace string Namespace is the namespace of the mutex, default: [namespace of workflow]","title":"Fields"},{"location":"fields/#semaphoreref","text":"SemaphoreRef is a reference of Semaphore","title":"SemaphoreRef"},{"location":"fields/#fields_35","text":"Field Name Field Type Description configMapKeyRef ConfigMapKeySelector ConfigMapKeyRef is configmap selector for Semaphore configuration namespace string Namespace is the namespace of the configmap, default: [namespace of workflow]","title":"Fields"},{"location":"fields/#artifactlocation","text":"ArtifactLocation describes a location for a single or multiple artifacts. It is used as single artifact in the context of inputs/outputs (e.g. outputs.artifacts.artname). It is also used to describe the location of multiple artifacts such as the archive location of a single workflow step, which the executor will use as a default location to store its files. 
Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml)","title":"ArtifactLocation"},{"location":"fields/#fields_36","text":"Field Name Field Type Description archiveLogs boolean ArchiveLogs indicates if the container logs should be archived artifactory ArtifactoryArtifact Artifactory contains artifactory artifact location details azure AzureArtifact Azure contains Azure Storage artifact location details gcs GCSArtifact GCS contains GCS artifact location details git GitArtifact Git contains git artifact location details hdfs HDFSArtifact HDFS contains HDFS artifact location details http HTTPArtifact HTTP contains HTTP artifact location details oss OSSArtifact OSS contains OSS artifact location details raw RawArtifact Raw contains raw artifact location details s3 S3Artifact S3 contains S3 artifact location details","title":"Fields"},{"location":"fields/#containersettemplate","text":"No description available Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml)","title":"ContainerSetTemplate"},{"location":"fields/#fields_37","text":"Field Name Field Type Description containers Array< ContainerNode > No description available retryStrategy ContainerSetRetryStrategy RetryStrategy describes how to retry a container nodes in the container set if it fails. Nbr of retries(default 0) and sleep duration between retries(default 0s, instant retry) can be set. 
volumeMounts Array< VolumeMount > No description available","title":"Fields"},{"location":"fields/#dagtemplate","text":"DAGTemplate is a template subtype for directed acyclic graph templates Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - 
[`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"DAGTemplate"},{"location":"fields/#fields_38","text":"Field Name Field Type Description failFast boolean This flag is for DAG logic. The DAG logic has a built-in \"fail fast\" feature to stop scheduling new steps, as soon as it detects that one of the DAG nodes is failed. Then it waits until all DAG nodes are completed before failing the DAG itself. The FailFast flag default is true, if set to false, it will allow a DAG to run all branches of the DAG to completion (either success or failure), regardless of the failed outcomes of branches in the DAG. More info and example about this feature at https://github.com/argoproj/argo-workflows/issues/1442 target string Target are one or more names of targets to execute in a DAG tasks Array< DAGTask > Tasks are a list of DAG tasks","title":"Fields"},{"location":"fields/#data","text":"Data is a data template Examples with this field (click to open) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml)","title":"Data"},{"location":"fields/#fields_39","text":"Field Name Field Type Description source DataSource Source sources external data into a data template transformation Array< TransformationStep > Transformation applies a set of transformations","title":"Fields"},{"location":"fields/#http","text":"No description available Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - 
[`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml)","title":"HTTP"},{"location":"fields/#fields_40","text":"Field Name Field Type Description body string Body is content of the HTTP Request bodyFrom HTTPBodySource BodyFrom is content of the HTTP Request as Bytes headers Array< HTTPHeader > Headers are an optional list of headers to send with HTTP requests insecureSkipVerify boolean InsecureSkipVerify is a bool when if set to true will skip TLS verification for the HTTP client method string Method is HTTP methods for HTTP Request successCondition string SuccessCondition is an expression if evaluated to true is considered successful timeoutSeconds integer TimeoutSeconds is request timeout for HTTP Request. Default is 30 seconds url string URL of the HTTP Request","title":"Fields"},{"location":"fields/#usercontainer","text":"UserContainer is a container specified by a user. Examples with this field (click to open) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml)","title":"UserContainer"},{"location":"fields/#fields_41","text":"Field Name Field Type Description args Array< string > Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. 
When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes mirrorVolumeMounts boolean MirrorVolumeMounts will mount the same volumes specified in the main container to the container (including artifacts), at the same mountPaths. This enables dind daemon to partially see the same filesystem as the main container in order to use features such as docker volume binding name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. 
When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. 
Cannot be updated.","title":"Fields"},{"location":"fields/#inputs","text":"Inputs are the mechanism for passing parameters, artifacts, volumes from one template to another Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - 
[`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - 
[`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"Inputs"},{"location":"fields/#fields_42","text":"Field Name Field Type Description artifacts Array< Artifact > Artifact are a list of artifacts passed as inputs parameters Array< Parameter > Parameters are a list of parameters passed as inputs","title":"Fields"},{"location":"fields/#memoize","text":"Memoization enables caching for the Outputs of the template Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml)","title":"Memoize"},{"location":"fields/#fields_43","text":"Field Name Field Type Description cache Cache Cache sets and configures the kind of cache key string Key is the key to use as the caching key maxAge string MaxAge is the maximum age (e.g. \"180s\", \"24h\") of an entry that is still considered valid. If an entry is older than the MaxAge, it will be ignored.","title":"Fields"},{"location":"fields/#plugin","text":"Plugin is an Object with exactly one key","title":"Plugin"},{"location":"fields/#resourcetemplate","text":"ResourceTemplate is a template subtype to manipulate kubernetes resources Examples with this field (click to open) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml)","title":"ResourceTemplate"},{"location":"fields/#fields_44","text":"Field Name Field Type Description action string Action is the action to perform to the resource. Must be one of: get, create, apply, delete, replace, patch failureCondition string FailureCondition is a label selector expression which describes the conditions of the k8s resource in which the step was considered failed flags Array< string > Flags is a set of additional options passed to kubectl before submitting a resource I.e. to disable resource validation: flags: [ \"--validate=false\" # disable resource validation ] manifest string Manifest contains the kubernetes manifest manifestFrom ManifestFrom ManifestFrom is the source for a single kubernetes manifest mergeStrategy string MergeStrategy is the strategy used to merge a patch. It defaults to \"strategic\" Must be one of: strategic, merge, json setOwnerReference boolean SetOwnerReference sets the reference to the workflow on the OwnerReference of generated resource. 
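To make the ResourceTemplate fields above a bit more concrete, here is a minimal sketch of a resource template that creates a Kubernetes Job and waits on its status. The Job manifest, image, and condition expressions are illustrative assumptions, not values taken from the reference itself:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: resource-sketch-
spec:
  entrypoint: create-job
  templates:
    - name: create-job
      resource:
        action: create                           # one of: get, create, apply, delete, replace, patch
        setOwnerReference: true                  # generated Job is garbage-collected with the Workflow
        successCondition: status.succeeded > 0   # evaluated against the live state of the created resource
        failureCondition: status.failed > 1
        manifest: |
          apiVersion: batch/v1
          kind: Job
          metadata:
            generateName: sketch-job-
          spec:
            template:
              spec:
                containers:
                  - name: main
                    image: alpine:3.18
                    command: [echo, hello]
                restartPolicy: Never
```

The step only completes once either the successCondition or the failureCondition matches the created resource.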
successCondition string SuccessCondition is a label selector expression which describes the conditions of the k8s resource in which it is acceptable to proceed to the following step","title":"Fields"},{"location":"fields/#scripttemplate","text":"ScriptTemplate is a template subtype to enable scripting through code steps Examples with this field (click to open) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - 
[`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"ScriptTemplate"},{"location":"fields/#fields_45","text":"Field Name Field Type Description args Array< string > Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. 
Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ source string Source contains the source code of the script to execute startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicates how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. 
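For orientation, a minimal script template usually only sets the image, command, and source fields from this table; the remaining fields are optional. The image and script below are illustrative assumptions (a minimal sketch, not taken from the reference):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: script-sketch-
spec:
  entrypoint: gen-random
  templates:
    - name: gen-random
      script:
        image: python:alpine3.6   # container image the script runs in
        command: [python]         # interpreter; the source below is saved to a file and passed to it
        source: |
          import random
          print(random.randint(1, 100))
```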
workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.","title":"Fields"},{"location":"fields/#workflowstep","text":"WorkflowStep is a reference to a template to execute in a series of steps Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - 
[`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - 
[`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"WorkflowStep"},{"location":"fields/#fields_46","text":"Field Name Field Type Description arguments Arguments Arguments hold arguments to the template continueOn 
ContinueOn ContinueOn makes Argo proceed with the following step even if this step fails. Errors and Failed states can be specified hooks LifecycleHook Hooks holds the lifecycle hooks which are invoked at different points of the step's lifecycle, irrespective of the success, failure, or error status of the primary step inline Template Inline is the template. Template must be empty if this is declared (and vice-versa). name string Name of the step ~~ onExit ~~ ~~ string ~~ ~~OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template.~~ DEPRECATED: Use Hooks[exit].Template instead. template string Template is the name of the template to execute as the step templateRef TemplateRef TemplateRef is the reference to the template resource to execute as the step. when string When is an expression in which the step should conditionally execute withItems Array< Item > WithItems expands a step into multiple parallel steps from the items in the list withParam string WithParam expands a step into multiple parallel steps from the value in the parameter, which is expected to be a JSON list. withSequence Sequence WithSequence expands a step into a numeric sequence","title":"Fields"},{"location":"fields/#suspendtemplate","text":"SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time Examples with this field (click to open) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml)","title":"SuspendTemplate"},{"location":"fields/#fields_47","text":"Field Name Field Type Description duration string Duration is the number of seconds to wait before automatically resuming a template. Must be a string. Default unit is seconds. 
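As a rough sketch of how WorkflowStep and SuspendTemplate fields combine, the hypothetical workflow below runs a suspend step with a duration, then a step guarded by a when expression. The parameter name, images, and condition are assumptions for illustration only:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: steps-suspend-sketch-
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: should-print
        value: "true"
  templates:
    - name: main
      steps:
        - - name: pause
            template: wait-a-bit
        - - name: print
            template: print-message
            when: "{{workflow.parameters.should-print}} == true"   # step runs only if the expression holds
    - name: wait-a-bit
      suspend:
        duration: "30s"          # plain numbers are seconds; "2m" or "6h" style durations also work
    - name: print-message
      container:
        image: alpine:3.18
        command: [echo, hello]
```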
Could also be a Duration, e.g.: \"2m\", \"6h\"","title":"Fields"},{"location":"fields/#labelvaluefrom","text":"No description available Examples with this field (click to open) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml)","title":"LabelValueFrom"},{"location":"fields/#fields_48","text":"Field Name Field Type Description expression string No description available","title":"Fields"},{"location":"fields/#artifactrepository","text":"ArtifactRepository represents an artifact repository in which a controller will store its artifacts","title":"ArtifactRepository"},{"location":"fields/#fields_49","text":"Field Name Field Type Description archiveLogs boolean ArchiveLogs enables log archiving artifactory ArtifactoryArtifactRepository Artifactory stores artifacts to JFrog Artifactory azure AzureArtifactRepository Azure stores artifacts in an Azure Storage account gcs GCSArtifactRepository GCS stores artifacts in a GCS object store hdfs HDFSArtifactRepository HDFS stores artifacts in HDFS oss OSSArtifactRepository OSS stores artifacts in an OSS-compliant object store s3 S3ArtifactRepository S3 stores artifacts in an S3-compliant object store","title":"Fields"},{"location":"fields/#memoizationstatus","text":"MemoizationStatus is the status of this memoized node","title":"MemoizationStatus"},{"location":"fields/#fields_50","text":"Field Name Field Type Description cacheName string Cache is the name of the cache that was used hit boolean Hit indicates whether this node was created from a cache entry key string Key is the name of the key used for this node's cache","title":"Fields"},{"location":"fields/#nodeflag","text":"No description available","title":"NodeFlag"},{"location":"fields/#fields_51","text":"Field Name Field Type Description hooked boolean Hooked tracks whether or not this node was triggered by hook or onExit retried boolean Retried tracks whether or not this node was retried by retryStrategy","title":"Fields"},{"location":"fields/#nodesynchronizationstatus","text":"NodeSynchronizationStatus stores the status of a node","title":"NodeSynchronizationStatus"},{"location":"fields/#fields_52","text":"Field Name Field Type Description waiting string Waiting is the name of the lock that this node is waiting for","title":"Fields"},{"location":"fields/#mutexstatus","text":"MutexStatus contains which objects hold mutex locks, and which objects this workflow is waiting on to release locks. Examples with this field (click to open) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml)","title":"MutexStatus"},{"location":"fields/#fields_53","text":"Field Name Field Type Description holding Array< MutexHolding > Holding is a list of mutexes and their respective objects that are held by mutex lock for this workflow. waiting Array< MutexHolding > Waiting is a list of mutexes and their respective objects this workflow is waiting for.","title":"Fields"},{"location":"fields/#semaphorestatus","text":"No description available","title":"SemaphoreStatus"},{"location":"fields/#fields_54","text":"Field Name Field Type Description holding Array< SemaphoreHolding > Holding stores the list of resources that have acquired the synchronization lock for workflows. 
waiting Array< SemaphoreHolding > Waiting indicates the list of objects waiting to acquire the synchronization lock.","title":"Fields"},{"location":"fields/#archivestrategy","text":"ArchiveStrategy describes how to archive files/directories when saving artifacts Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml)","title":"ArchiveStrategy"},{"location":"fields/#fields_55","text":"Field Name Field Type Description none NoneStrategy No description available tar TarStrategy No description available zip ZipStrategy No description available","title":"Fields"},{"location":"fields/#artifactgc","text":"ArtifactGC describes how to delete artifacts from completed Workflows - this is embedded into the WorkflowLevelArtifactGC, and also used for individual Artifacts to override that as needed Examples with this field (click to open) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml)","title":"ArtifactGC"},{"location":"fields/#fields_56","text":"Field Name Field Type Description podMetadata Metadata PodMetadata is an optional field for specifying the Labels and Annotations that should be assigned to the Pod doing the deletion serviceAccountName string ServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion strategy string Strategy is the strategy to use.","title":"Fields"},{"location":"fields/#artifactoryartifact","text":"ArtifactoryArtifact is the location of an artifactory artifact Examples with this field (click to open) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml)","title":"ArtifactoryArtifact"},{"location":"fields/#fields_57","text":"Field Name Field Type Description passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password url string URL of the artifact usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username","title":"Fields"},{"location":"fields/#azureartifact","text":"AzureArtifact is the location of an Azure Storage artifact Examples with this field (click to open) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml)","title":"AzureArtifact"},{"location":"fields/#fields_58","text":"Field Name Field Type Description accountKeySecret SecretKeySelector AccountKeySecret is the secret selector to the Azure Blob Storage account access key blob string Blob is the blob name (i.e., path) in the container where the artifact resides container string Container is the container where resources will be stored endpoint string Endpoint is 
the service url associated with an account. It is most likely \"https://<ACCOUNT_NAME>.blob.core.windows.net\" useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":"Fields"},{"location":"fields/#gcsartifact","text":"GCSArtifact is the location of a GCS artifact Examples with this field (click to open) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml)","title":"GCSArtifact"},{"location":"fields/#fields_59","text":"Field Name Field Type Description bucket string Bucket is the name of the bucket key string Key is the path in the bucket where the artifact resides serviceAccountKeySecret SecretKeySelector ServiceAccountKeySecret is the secret selector to the bucket's service account key","title":"Fields"},{"location":"fields/#gitartifact","text":"GitArtifact is the location of a git artifact Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml)","title":"GitArtifact"},{"location":"fields/#fields_60","text":"Field Name Field Type Description branch string Branch is the branch to fetch when SingleBranch is enabled depth integer Depth specifies that clones/fetches should be shallow and include the given number of commits from the branch tip disableSubmodules boolean DisableSubmodules disables submodules during git clone fetch Array< string > Fetch specifies a number of refs that should be fetched before checkout insecureIgnoreHostKey boolean InsecureIgnoreHostKey disables SSH strict host key checking during git clone passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password repo string Repo is the git repository revision string Revision is the git commit, tag, or branch to checkout singleBranch boolean SingleBranch enables single branch clone, using the branch parameter sshPrivateKeySecret SecretKeySelector SSHPrivateKeySecret is the secret selector to the repository ssh private key usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username","title":"Fields"},{"location":"fields/#hdfsartifact","text":"HDFSArtifact is the location of an HDFS artifact Examples with this field (click to open) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml)","title":"HDFSArtifact"},{"location":"fields/#fields_61","text":"Field Name Field Type Description addresses Array< string > Addresses are the accessible addresses of the HDFS name nodes force boolean Force copies a file forcibly even if it exists hdfsUser string HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used. krbCCacheSecret SecretKeySelector KrbCCacheSecret is the secret selector for the Kerberos ccache. Either ccache or keytab can be set to use Kerberos. 
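A minimal sketch of the GitArtifact fields in use: the input artifact below performs a shallow checkout of a repository into the container. The repo URL points at the Argo Workflows repository purely as an example, and the image is an assumption:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: git-input-sketch-
spec:
  entrypoint: list-sources
  templates:
    - name: list-sources
      inputs:
        artifacts:
          - name: source
            path: /src                 # where the checkout is placed inside the container
            git:
              repo: https://github.com/argoproj/argo-workflows.git
              revision: main           # commit, tag, or branch to check out
              depth: 1                 # shallow clone with a single commit
      container:
        image: alpine:3.18
        command: [ls, /src]
```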
krbConfigConfigMap ConfigMapKeySelector KrbConfig is the configmap selector for Kerberos config as string It must be set if either ccache or keytab is used. krbKeytabSecret SecretKeySelector KrbKeytabSecret is the secret selector for Kerberos keytab Either ccache or keytab can be set to use Kerberos. krbRealm string KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used. krbServicePrincipalName string KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used. krbUsername string KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used. path string Path is a file path in HDFS","title":"Fields"},{"location":"fields/#httpartifact","text":"HTTPArtifact allows a file served on HTTP to be placed as an input artifact in a container Examples with this field (click to open) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml)","title":"HTTPArtifact"},{"location":"fields/#fields_62","text":"Field Name Field Type Description auth HTTPAuth Auth contains information for client authentication headers Array< Header > Headers are an optional list of headers to send with HTTP requests for artifacts url string URL of the artifact","title":"Fields"},{"location":"fields/#ossartifact","text":"OSSArtifact is the location of an Alibaba Cloud OSS artifact Examples with this field (click to open) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml)","title":"OSSArtifact"},{"location":"fields/#fields_63","text":"Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's 
access key bucket string Bucket is the name of the bucket createBucketIfNotPresent boolean CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist. endpoint string Endpoint is the hostname of the bucket endpoint key string Key is the path in the bucket where the artifact resides lifecycleRule OSSLifecycleRule LifecycleRule specifies how to manage the bucket's lifecycle secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key securityToken string SecurityToken is the user's temporary security token. For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":"Fields"},{"location":"fields/#rawartifact","text":"RawArtifact allows raw string content to be placed as an artifact in a container Examples with this field (click to open) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml)","title":"RawArtifact"},{"location":"fields/#fields_64","text":"Field Name Field Type Description data string Data is the string contents of the artifact","title":"Fields"},{"location":"fields/#s3artifact","text":"S3Artifact is the location of an S3 artifact","title":"S3Artifact"},{"location":"fields/#fields_65","text":"Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's access key bucket string Bucket is the name of the bucket caSecret SecretKeySelector CASecret specifies the secret that contains the CA, used to verify the TLS connection createBucketIfNotPresent CreateS3BucketOptions CreateBucketIfNotPresent tells the driver to attempt to create the S3 bucket for output artifacts, if it doesn't exist. Setting Enabled Encryption will apply either SSE-S3 to the bucket if KmsKeyId is not set or SSE-KMS if it is. encryptionOptions S3EncryptionOptions No description available endpoint string Endpoint is the hostname of the bucket endpoint insecure boolean Insecure, if set to true, will connect to the service without TLS key string Key is the key in the bucket where the artifact resides region string Region contains the optional bucket region roleARN string RoleARN is the Amazon Resource Name (ARN) of the role to assume. 
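To illustrate the S3Artifact fields, here is a sketch of an output artifact uploaded to an S3-compatible endpoint. The bucket, key, region, and secret names are hypothetical placeholders, and the image is an assumption:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: s3-output-sketch-
spec:
  entrypoint: produce
  templates:
    - name: produce
      container:
        image: alpine:3.18
        command: [sh, -c, "echo hello > /tmp/message.txt"]
      outputs:
        artifacts:
          - name: message
            path: /tmp/message.txt
            s3:
              endpoint: s3.amazonaws.com        # hostname of the bucket endpoint
              bucket: my-example-bucket          # hypothetical bucket name
              key: sketches/message.txt
              region: us-east-1
              accessKeySecret:
                name: my-s3-credentials          # hypothetical secret holding the credentials
                key: accessKey
              secretKeySecret:
                name: my-s3-credentials
                key: secretKey
```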
secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on sdk defaults.","title":"Fields"},{"location":"fields/#valuefrom","text":"ValueFrom describes a location in which to obtain the value to a parameter Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml)","title":"ValueFrom"},{"location":"fields/#fields_66","text":"Field Name Field Type Description configMapKeyRef ConfigMapKeySelector ConfigMapKeyRef is configmap selector for input parameter configuration default string Default specifies a value to be used if retrieving the value from the specified source fails 
event string Selector (https://github.com/antonmedv/expr) that is evaluated against the event to get the value of the parameter. E.g. payload.message expression string Expression, if defined, is evaluated to specify the value for the parameter jqFilter string JQFilter expression against the resource object in resource templates jsonPath string JSONPath of a resource to retrieve an output parameter value from in resource templates parameter string Parameter reference to a step or dag task from which to retrieve an output parameter value (e.g. '{{steps.mystep.outputs.myparam}}') path string Path in the container to retrieve an output parameter value from in container templates supplied SuppliedValueFrom Supplied value to be filled in directly, either through the CLI, API, etc.","title":"Fields"},{"location":"fields/#counter","text":"Counter is a Counter Prometheus metric Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml)","title":"Counter"},{"location":"fields/#fields_67","text":"Field Name Field Type Description value string Value is the value of the metric","title":"Fields"},{"location":"fields/#gauge","text":"Gauge is a Gauge Prometheus metric Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml)","title":"Gauge"},{"location":"fields/#fields_68","text":"Field Name Field Type Description operation string Operation defines the operation to apply with value and the metric's current value realtime boolean Realtime emits this metric in real time if applicable value string Value is the value to be used in the operation with the metric's current value. 
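A small sketch of valueFrom in practice: the first template below writes a value to a file and exposes it through valueFrom.path, and a second step consumes it as an input parameter. The file path, images, and parameter names are assumptions for illustration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: valuefrom-sketch-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: produce
            template: produce-param
        - - name: consume
            template: print
            arguments:
              parameters:
                - name: msg
                  value: "{{steps.produce.outputs.parameters.answer}}"
    - name: produce-param
      container:
        image: alpine:3.18
        command: [sh, -c, "echo -n 42 > /tmp/value.txt"]
      outputs:
        parameters:
          - name: answer
            valueFrom:
              path: /tmp/value.txt   # read the parameter value from this file in the container
              default: "0"           # fallback if reading the file fails
    - name: print
      inputs:
        parameters:
          - name: msg
      container:
        image: alpine:3.18
        command: [echo, "{{inputs.parameters.msg}}"]
```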
If no operation is set, value is the value of the metric","title":"Fields"},{"location":"fields/#histogram","text":"Histogram is a Histogram prometheus metric Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml)","title":"Histogram"},{"location":"fields/#fields_69","text":"Field Name Field Type Description buckets Array< Amount > Buckets is a list of bucket divisors for the histogram value string Value is the value of the metric","title":"Fields"},{"location":"fields/#metriclabel","text":"MetricLabel is a single label for a prometheus metric Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - 
[`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml)","title":"MetricLabel"},{"location":"fields/#fields_70","text":"Field Name Field Type Description key string No description available value string No description available","title":"Fields"},{"location":"fields/#retrynodeantiaffinity","text":"RetryNodeAntiAffinity is a placeholder for future expansion, only empty nodeAntiAffinity is allowed. In order to prevent running steps on the same host, it uses \"kubernetes.io/hostname\".","title":"RetryNodeAntiAffinity"},{"location":"fields/#containernode","text":"No description available Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml)","title":"ContainerNode"},{"location":"fields/#fields_71","text":"Field Name Field Type Description args Array< string > Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". 
Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell dependencies Array< string > No description available env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. 
Cannot be updated.","title":"Fields"},{"location":"fields/#containersetretrystrategy","text":"No description available Examples with this field (click to open) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"ContainerSetRetryStrategy"},{"location":"fields/#fields_72","text":"Field Name Field Type Description duration string Duration is the time between each retry, examples values are \"300ms\", \"1s\" or \"5m\". Valid time units are \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\". retries IntOrString Nbr of retries","title":"Fields"},{"location":"fields/#dagtask","text":"DAGTask represents a node in the graph during DAG execution Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - 
[`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"DAGTask"},{"location":"fields/#fields_73","text":"Field Name Field Type Description arguments Arguments Arguments are the parameter and artifact arguments to the template continueOn ContinueOn ContinueOn makes Argo proceed with the following step even if this step fails. Errors and Failed states can be specified dependencies Array< string > Dependencies are the names of other targets which this task depends on depends string Depends names the other targets which this task depends on hooks LifecycleHook Hooks hold the lifecycle hook which is invoked during the lifecycle of the task, irrespective of the success, failure, or error status of the primary task inline Template Inline is the template. Template must be empty if this is declared (and vice-versa). name string Name is the name of the target ~~ onExit ~~ ~~ string ~~ ~~OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template.~~ DEPRECATED: Use Hooks[exit].Template instead. template string Name of template to execute templateRef TemplateRef TemplateRef is the reference to the template resource to execute.
when string When is an expression in which the task should conditionally execute withItems Array< Item > WithItems expands a task into multiple parallel tasks from the items in the list withParam string WithParam expands a task into multiple parallel tasks from the value in the parameter, which is expected to be a JSON list. withSequence Sequence WithSequence expands a task into a numeric sequence","title":"Fields"},{"location":"fields/#datasource","text":"DataSource sources external data into a data template Examples with this field (click to open) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - 
[`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"DataSource"},{"location":"fields/#fields_74","text":"Field Name Field Type Description artifactPaths ArtifactPaths ArtifactPaths is a data transformation that collects a list of artifact paths","title":"Fields"},{"location":"fields/#transformationstep","text":"No description available Examples with this field (click to open) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml)","title":"TransformationStep"},{"location":"fields/#fields_75","text":"Field Name Field Type Description expression string Expression defines an expr expression to apply","title":"Fields"},{"location":"fields/#httpbodysource","text":"HTTPBodySource contains the source of the HTTP body.","title":"HTTPBodySource"},{"location":"fields/#fields_76","text":"Field Name Field Type Description bytes byte No description available","title":"Fields"},{"location":"fields/#httpheader","text":"No description available Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml)","title":"HTTPHeader"},{"location":"fields/#fields_77","text":"Field Name Field Type Description name string No description available value string No description available valueFrom HTTPHeaderSource No description available","title":"Fields"},{"location":"fields/#cache","text":"Cache is the configuration for the type of cache to be used Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml)","title":"Cache"},{"location":"fields/#fields_78","text":"Field Name Field Type Description configMap ConfigMapKeySelector ConfigMap sets a ConfigMap-based cache","title":"Fields"},{"location":"fields/#manifestfrom","text":"No description available","title":"ManifestFrom"},{"location":"fields/#fields_79","text":"Field Name Field Type Description artifact Artifact Artifact contains the artifact to use","title":"Fields"},{"location":"fields/#continueon","text":"ContinueOn defines if a workflow should continue even if a task or step fails/errors. It can be specified if the workflow should continue when the pod errors, fails or both. 
Examples with this field (click to open) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml)","title":"ContinueOn"},{"location":"fields/#fields_80","text":"Field Name Field Type Description error boolean No description available failed boolean No description available","title":"Fields"},{"location":"fields/#item","text":"Item expands a single workflow step into multiple parallel steps The value of Item can be a map, string, bool, or number Examples with this field (click to open) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml)","title":"Item"},{"location":"fields/#sequence","text":"Sequence expands a workflow step into numeric range Examples with this field (click to open) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"Sequence"},{"location":"fields/#fields_81","text":"Field Name Field Type Description count IntOrString Count is number of elements in the sequence (default: 0). Not to be used with end end IntOrString Number at which to end the sequence (default: 0). 
Not to be used with Count format string Format is a printf format string to format the value in the sequence start IntOrString Number at which to start the sequence (default: 0)","title":"Fields"},{"location":"fields/#artifactoryartifactrepository","text":"ArtifactoryArtifactRepository defines the controller configuration for an Artifactory artifact repository Examples with this field (click to open) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml)","title":"ArtifactoryArtifactRepository"},{"location":"fields/#fields_82","text":"Field Name Field Type Description keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password repoURL string RepoURL is the URL for the Artifactory repo. usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username","title":"Fields"},{"location":"fields/#azureartifactrepository","text":"AzureArtifactRepository defines the controller configuration for an Azure Blob Storage artifact repository Examples with this field (click to open) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml)","title":"AzureArtifactRepository"},{"location":"fields/#fields_83","text":"Field Name Field Type Description accountKeySecret SecretKeySelector AccountKeySecret is the secret selector to the Azure Blob Storage account access key blobNameFormat string BlobNameFormat defines the format of how to store blob names. Can reference workflow variables container string Container is the container where resources will be stored endpoint string Endpoint is the service URL associated with an account. It is most likely \"https:// .blob.core.windows.net\" useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on SDK defaults.","title":"Fields"},{"location":"fields/#gcsartifactrepository","text":"GCSArtifactRepository defines the controller configuration for a GCS artifact repository Examples with this field (click to open) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml)","title":"GCSArtifactRepository"},{"location":"fields/#fields_84","text":"Field Name Field Type Description bucket string Bucket is the name of the bucket keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables.
serviceAccountKeySecret SecretKeySelector ServiceAccountKeySecret is the secret selector to the bucket's service account key","title":"Fields"},{"location":"fields/#hdfsartifactrepository","text":"HDFSArtifactRepository defines the controller configuration for an HDFS artifact repository Examples with this field (click to open) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml)","title":"HDFSArtifactRepository"},{"location":"fields/#fields_85","text":"Field Name Field Type Description addresses Array< string > Addresses are the accessible addresses of the HDFS name nodes force boolean Force copies a file forcibly even if it exists hdfsUser string HDFSUser is the user to access the HDFS file system. It is ignored if either ccache or keytab is used. krbCCacheSecret SecretKeySelector KrbCCacheSecret is the secret selector for the Kerberos ccache. Either ccache or keytab can be set to use Kerberos. krbConfigConfigMap ConfigMapKeySelector KrbConfig is the configmap selector for the Kerberos config as a string. It must be set if either ccache or keytab is used. krbKeytabSecret SecretKeySelector KrbKeytabSecret is the secret selector for the Kerberos keytab. Either ccache or keytab can be set to use Kerberos. krbRealm string KrbRealm is the Kerberos realm used with the Kerberos keytab. It must be set if keytab is used. krbServicePrincipalName string KrbServicePrincipalName is the principal name of the Kerberos service. It must be set if either ccache or keytab is used. krbUsername string KrbUsername is the Kerberos username used with the Kerberos keytab. It must be set if keytab is used. pathFormat string PathFormat defines the format of the path to store a file. Can reference workflow variables","title":"Fields"},{"location":"fields/#ossartifactrepository","text":"OSSArtifactRepository defines the controller configuration for an OSS artifact repository Examples with this field (click to open) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml)","title":"OSSArtifactRepository"},{"location":"fields/#fields_86","text":"Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's access key bucket string Bucket is the name of the bucket createBucketIfNotPresent boolean CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist endpoint string Endpoint is the hostname of the bucket endpoint keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. lifecycleRule OSSLifecycleRule LifecycleRule specifies how to manage the bucket's lifecycle secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key securityToken string SecurityToken is the user's temporary security token.
For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on SDK defaults.","title":"Fields"},{"location":"fields/#s3artifactrepository","text":"S3ArtifactRepository defines the controller configuration for an S3 artifact repository","title":"S3ArtifactRepository"},{"location":"fields/#fields_87","text":"Field Name Field Type Description accessKeySecret SecretKeySelector AccessKeySecret is the secret selector to the bucket's access key bucket string Bucket is the name of the bucket caSecret SecretKeySelector CASecret specifies the secret that contains the CA, used to verify the TLS connection createBucketIfNotPresent CreateS3BucketOptions CreateBucketIfNotPresent tells the driver to attempt to create the S3 bucket for output artifacts, if it doesn't exist. Setting Enabled Encryption will apply either SSE-S3 to the bucket if KmsKeyId is not set or SSE-KMS if it is. encryptionOptions S3EncryptionOptions No description available endpoint string Endpoint is the hostname of the bucket endpoint insecure boolean Insecure will connect to the service without TLS keyFormat string KeyFormat defines the format of how to store keys and can reference workflow variables. ~~ keyPrefix ~~ ~~ string ~~ ~~KeyPrefix is the prefix used as part of the bucket key in which the controller will store artifacts.~~ DEPRECATED. Use KeyFormat instead region string Region contains the optional bucket region roleARN string RoleARN is the Amazon Resource Name (ARN) of the role to assume. secretKeySecret SecretKeySelector SecretKeySecret is the secret selector to the bucket's secret key useSDKCreds boolean UseSDKCreds tells the driver to figure out credentials based on SDK defaults.","title":"Fields"},{"location":"fields/#mutexholding","text":"MutexHolding describes the mutex and the object which is holding it.","title":"MutexHolding"},{"location":"fields/#fields_88","text":"Field Name Field Type Description holder string Holder is a reference to the object which holds the Mutex. Holding Scenario: 1. Current workflow's NodeID which is holding the lock. e.g: ${NodeID} Waiting Scenario: 1. Current workflow or other workflow NodeID which is holding the lock. e.g: ${WorkflowName}/${NodeID} mutex string Reference for the mutex e.g: ${namespace}/mutex/${mutexName}","title":"Fields"},{"location":"fields/#semaphoreholding","text":"No description available","title":"SemaphoreHolding"},{"location":"fields/#fields_89","text":"Field Name Field Type Description holders Array< string > Holders stores the list of current holder names in the io.argoproj.workflow.v1alpha1. semaphore string Semaphore stores the semaphore name.","title":"Fields"},{"location":"fields/#nonestrategy","text":"NoneStrategy indicates to skip the tar process and upload the files or directory tree as independent files. Note that if the artifact is a directory, the artifact driver must support the ability to save/load the directory appropriately.
Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml)","title":"NoneStrategy"},{"location":"fields/#tarstrategy","text":"TarStrategy will tar and gzip the file or directory when saving Examples with this field (click to open) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml)","title":"TarStrategy"},{"location":"fields/#fields_90","text":"Field Name Field Type Description compressionLevel integer CompressionLevel specifies the gzip compression level to use for the artifact. Defaults to gzip.DefaultCompression.","title":"Fields"},{"location":"fields/#zipstrategy","text":"ZipStrategy will unzip zipped input artifacts","title":"ZipStrategy"},{"location":"fields/#httpauth","text":"No description available Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml)","title":"HTTPAuth"},{"location":"fields/#fields_91","text":"Field Name Field Type Description basicAuth BasicAuth No description available clientCert ClientCertAuth No description available oauth2 OAuth2Auth No description available","title":"Fields"},{"location":"fields/#header","text":"Header indicates a key-value request header to be used when fetching artifacts over HTTP Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml)","title":"Header"},{"location":"fields/#fields_92","text":"Field Name Field Type Description name string Name is the header name value string Value is the literal value to use for the header","title":"Fields"},{"location":"fields/#osslifecyclerule","text":"OSSLifecycleRule specifies how to manage the bucket's lifecycle","title":"OSSLifecycleRule"},{"location":"fields/#fields_93","text":"Field Name Field Type Description markDeletionAfterDays integer MarkDeletionAfterDays is the number of days before we delete objects in the bucket markInfrequentAccessAfterDays integer MarkInfrequentAccessAfterDays is the number of days before we convert the objects in the bucket to Infrequent Access (IA) storage type","title":"Fields"},{"location":"fields/#creates3bucketoptions","text":"CreateS3BucketOptions options used to determine the automatic bucket-creation process","title":"CreateS3BucketOptions"},{"location":"fields/#fields_94","text":"Field Name Field Type Description objectLocking boolean ObjectLocking enables object locking","title":"Fields"},{"location":"fields/#s3encryptionoptions","text":"S3EncryptionOptions used to determine encryption options during S3 operations","title":"S3EncryptionOptions"},{"location":"fields/#fields_95","text":"Field Name Field Type Description enableEncryption
boolean EnableEncryption tells the driver to encrypt objects if set to true. If kmsKeyId and serverSideCustomerKeySecret are not set, SSE-S3 will be used kmsEncryptionContext string KmsEncryptionContext is a JSON blob that contains an encryption context. See https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context for more information kmsKeyId string KMSKeyId tells the driver to encrypt the object using the specified KMS Key. serverSideCustomerKeySecret SecretKeySelector ServerSideCustomerKeySecret tells the driver to encrypt the output artifacts using SSE-C with the specified secret.","title":"Fields"},{"location":"fields/#suppliedvaluefrom","text":"SuppliedValueFrom is a placeholder for a value to be filled in directly, either through the CLI, API, etc. Examples with this field (click to open) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml)","title":"SuppliedValueFrom"},{"location":"fields/#amount","text":"Amount represents a numeric amount. Examples with this field (click to open) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml)","title":"Amount"},{"location":"fields/#artifactpaths","text":"ArtifactPaths expands a step from a collection of artifacts Examples with this field (click to open) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml)","title":"ArtifactPaths"},{"location":"fields/#fields_96","text":"Field Name Field Type Description archive ArchiveStrategy Archive controls how the artifact will be saved to the artifact repository. archiveLogs boolean ArchiveLogs indicates if the container logs should be archived artifactGC ArtifactGC ArtifactGC describes the strategy to use when deleting an artifact from completed or deleted workflows artifactory ArtifactoryArtifact Artifactory contains Artifactory artifact location details azure AzureArtifact Azure contains Azure Storage artifact location details deleted boolean Has this been deleted? from string From allows an artifact to reference an artifact from a previous step fromExpression string FromExpression, if defined, is evaluated to specify the value for the artifact gcs GCSArtifact GCS contains GCS artifact location details git GitArtifact Git contains git artifact location details globalName string GlobalName exports an output artifact to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts hdfs HDFSArtifact HDFS contains HDFS artifact location details http HTTPArtifact HTTP contains HTTP artifact location details mode integer mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. name string name of the artifact. must be unique within a template's inputs/outputs.
optional boolean Make Artifacts optional, if Artifacts doesn't generate or exist oss OSSArtifact OSS contains OSS artifact location details path string Path is the container path to the artifact raw RawArtifact Raw contains raw artifact location details recurseMode boolean If mode is set, apply the permission recursively into the artifact if it is a folder s3 S3Artifact S3 contains S3 artifact location details subPath string SubPath allows an artifact to be sourced from a subpath within the specified source","title":"Fields"},{"location":"fields/#httpheadersource","text":"No description available Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - 
[`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml)","title":"HTTPHeaderSource"},{"location":"fields/#fields_97","text":"Field Name Field Type Description secretKeyRef SecretKeySelector No description available","title":"Fields"},{"location":"fields/#basicauth","text":"BasicAuth describes the secret selectors required for basic authentication","title":"BasicAuth"},{"location":"fields/#fields_98","text":"Field Name Field Type Description passwordSecret SecretKeySelector PasswordSecret is the secret selector to the repository password usernameSecret SecretKeySelector UsernameSecret is the secret selector to the repository username","title":"Fields"},{"location":"fields/#clientcertauth","text":"ClientCertAuth holds necessary information for client authentication via certificates Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml)","title":"ClientCertAuth"},{"location":"fields/#fields_99","text":"Field Name Field Type Description clientCertSecret SecretKeySelector No description available clientKeySecret SecretKeySelector No description available","title":"Fields"},{"location":"fields/#oauth2auth","text":"OAuth2Auth holds all information for client authentication via OAuth2 tokens","title":"OAuth2Auth"},{"location":"fields/#fields_100","text":"Field Name Field Type Description clientIDSecret SecretKeySelector No description available clientSecretSecret SecretKeySelector No description available endpointParams Array< OAuth2EndpointParam > No description available scopes Array< string > No description available tokenURLSecret SecretKeySelector No description available","title":"Fields"},{"location":"fields/#oauth2endpointparam","text":"EndpointParam is for requesting optional fields that should be sent in the oauth request Examples with this field (click to open) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml)","title":"OAuth2EndpointParam"},{"location":"fields/#fields_101","text":"Field Name Field Type Description key string Key is the name of the parameter to send in the OAuth2 request value string Value is the literal value to use for the parameter","title":"Fields"},{"location":"fields/#external-fields","text":"","title":"External Fields"},{"location":"fields/#objectmeta","text":"ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.
Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - 
[`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - 
[`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - 
[`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - 
[`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - 
[`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - 
[`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml)","title":"ObjectMeta"},{"location":"fields/#fields_102","text":"Field Name Field Type Description annotations Map< string , string > Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations clusterName string The name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request. creationTimestamp Time CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata deletionGracePeriodSeconds integer Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. deletionTimestamp Time DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested. Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata finalizers Array< string > Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. 
If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency generation integer A sequence number representing a specific generation of the desired state. Populated by the system. Read-only. labels Map< string , string > Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels managedFields Array< ManagedFieldsEntry > ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like \"ci-cd\". The set of fields is always in the version that the workflow used when modifying the object. name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences Array< OwnerReference > List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. 
If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. resourceVersion string An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency ~~ selfLink ~~ ~~ string ~~ ~~SelfLink is a URL representing this object. Populated by the system. Read-only.~~ DEPRECATED Kubernetes will stop propagating this field in 1.20 release and the field is planned to be removed in 1.21 release. uid string UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations. Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids","title":"Fields"},{"location":"fields/#affinity","text":"Affinity is a group of affinity scheduling rules.","title":"Affinity"},{"location":"fields/#fields_103","text":"Field Name Field Type Description nodeAffinity NodeAffinity Describes node affinity scheduling rules for the pod. podAffinity PodAffinity Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity PodAntiAffinity Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).","title":"Fields"},{"location":"fields/#poddnsconfig","text":"PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. Examples with this field (click to open) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml)","title":"PodDNSConfig"},{"location":"fields/#fields_104","text":"Field Name Field Type Description nameservers Array< string > A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options Array< PodDNSConfigOption > A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. searches Array< string > A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed.","title":"Fields"},{"location":"fields/#hostalias","text":"HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file.","title":"HostAlias"},{"location":"fields/#fields_105","text":"Field Name Field Type Description hostnames Array< string > Hostnames for the above IP address. ip string IP address of the host file entry.","title":"Fields"},{"location":"fields/#localobjectreference","text":"LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. 
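For illustration, a minimal Workflow sketch showing how a few of the ObjectMeta fields above (`generateName`, `labels`, `annotations`) and a `PodDNSConfig` are typically set; the names, label/annotation values, and nameserver address are illustrative only:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: metadata-example-    # server appends a unique suffix to build the final name
  labels:
    example.com/team: data           # labels are queryable key/value pairs (illustrative key)
  annotations:
    example.com/owner: data-team     # annotations hold arbitrary, non-queryable metadata (illustrative key)
spec:
  entrypoint: main
  dnsConfig:                         # PodDNSConfig: merged with the DNS settings derived from dnsPolicy
    nameservers:
      - 1.2.3.4
    options:
      - name: ndots
        value: "2"
  templates:
    - name: main
      container:
        image: busybox
        command: [echo, hello]
```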
Examples with this field (click to open) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml)","title":"LocalObjectReference"},{"location":"fields/#fields_106","text":"Field Name Field Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names","title":"Fields"},{"location":"fields/#poddisruptionbudgetspec","text":"PodDisruptionBudgetSpec is a description of a PodDisruptionBudget. Examples with this field (click to open) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml)","title":"PodDisruptionBudgetSpec"},{"location":"fields/#fields_107","text":"Field Name Field Type Description maxUnavailable IntOrString An eviction is allowed if at most \"maxUnavailable\" pods selected by \"selector\" are unavailable after the eviction, i.e. even in absence of the evicted pod. For example, one can prevent all voluntary evictions by specifying 0. This is a mutually exclusive setting with \"minAvailable\". minAvailable IntOrString An eviction is allowed if at least \"minAvailable\" pods selected by \"selector\" will still be available after the eviction, i.e. even in the absence of the evicted pod. So for example you can prevent all voluntary evictions by specifying \"100%\". selector LabelSelector Label query over pods whose evictions are managed by the disruption budget. A null selector will match no pods, while an empty ({}) selector will select all pods within the namespace.","title":"Fields"},{"location":"fields/#podsecuritycontext","text":"PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Examples with this field (click to open) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml)","title":"PodSecurityContext"},{"location":"fields/#fields_108","text":"Field Name Field Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are \"OnRootMismatch\" and \"Always\". If not specified, \"Always\" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. 
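As a sketch of how the PodDisruptionBudgetSpec above is used, a Workflow can ask the controller to create a PodDisruptionBudget for its pods via `spec.podDisruptionBudget` (compare the `default-pdb-support.yaml` example linked above); the value here is illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: pdb-example-
spec:
  entrypoint: main
  podDisruptionBudget:
    minAvailable: 9999     # higher than the pod count, so voluntary evictions are effectively blocked
                           # (mutually exclusive with maxUnavailable)
  templates:
    - name: main
      container:
        image: busybox
        command: [sleep, "10"]
```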
runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions SELinuxOptions The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile SeccompProfile The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups Array< integer > A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows. sysctls Array< Sysctl > Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. windowsOptions WindowsSecurityContextOptions The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.","title":"Fields"},{"location":"fields/#toleration","text":"The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator .","title":"Toleration"},{"location":"fields/#fields_109","text":"Field Name Field Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - \"NoExecute\" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - \"NoSchedule\" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - \"PreferNoSchedule\" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. 
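A minimal sketch of applying the PodSecurityContext fields above at the Workflow level through `spec.securityContext`; the UID/GID values are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: security-context-example-
spec:
  entrypoint: main
  securityContext:          # pod-level settings applied to all pods created for this workflow
    runAsNonRoot: true      # kubelet refuses to start containers that would run as UID 0
    runAsUser: 8737
    fsGroup: 8737           # group ownership applied to supported volume types
  templates:
    - name: main
      container:
        image: busybox
        command: [id]
```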
operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - \"Equal\" - \"Exists\" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.","title":"Fields"},{"location":"fields/#persistentvolumeclaim","text":"PersistentVolumeClaim is a user's request for and claim to a persistent volume Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"PersistentVolumeClaim"},{"location":"fields/#fields_110","text":"Field Name Field Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec PersistentVolumeClaimSpec Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status PersistentVolumeClaimStatus Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims","title":"Fields"},{"location":"fields/#volume","text":"Volume represents a named volume in a pod that may be accessed by any container in the pod. 
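The Toleration and PersistentVolumeClaim types above typically appear in a Workflow as `spec.tolerations` and `spec.volumeClaimTemplates`; a sketch with an illustrative taint key and storage size:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: pvc-toleration-example-
spec:
  entrypoint: main
  tolerations:                       # Toleration: allow scheduling onto nodes with a matching taint
    - key: dedicated                 # illustrative taint key/value
      operator: Equal
      value: workflows
      effect: NoSchedule
  volumeClaimTemplates:              # PersistentVolumeClaim created for the workflow and mounted below
    - metadata:
        name: workdir
      spec:
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 1Gi
  templates:
    - name: main
      container:
        image: busybox
        command: [sh, "-c"]
        args: ["echo hello > /mnt/vol/out.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /mnt/vol
```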
Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml)","title":"Volume"},{"location":"fields/#fields_111","text":"Field Name Field Type Description awsElasticBlockStore AWSElasticBlockStoreVolumeSource AWSElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk AzureDiskVolumeSource AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile AzureFileVolumeSource AzureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs CephFSVolumeSource CephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder CinderVolumeSource Cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap ConfigMapVolumeSource ConfigMap represents a configMap that should populate this volume csi CSIVolumeSource CSI (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI DownwardAPIVolumeSource DownwardAPI represents downward API about the pod that should populate this volume emptyDir EmptyDirVolumeSource EmptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral EphemeralVolumeSource Ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc FCVolumeSource FC represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. 
flexVolume FlexVolumeSource FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker FlockerVolumeSource Flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk GCEPersistentDiskVolumeSource GCEPersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk ~~ gitRepo ~~ ~~ GitRepoVolumeSource ~~ ~~GitRepo represents a git repository at a particular revision.~~ DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs GlusterfsVolumeSource Glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath HostPathVolumeSource HostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath iscsi ISCSIVolumeSource ISCSI represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string Volume's name. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs NFSVolumeSource NFS represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim PersistentVolumeClaimVolumeSource PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk PhotonPersistentDiskVolumeSource PhotonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume PortworxVolumeSource PortworxVolume represents a portworx volume attached and mounted on kubelets host machine projected ProjectedVolumeSource Items for all in one resources secrets, configmaps, and downward API quobyte QuobyteVolumeSource Quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd RBDVolumeSource RBD represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO ScaleIOVolumeSource ScaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret SecretVolumeSource Secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos StorageOSVolumeSource StorageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume VsphereVirtualDiskVolumeSource VsphereVolume represents a vSphere volume attached and mounted on kubelets host machine","title":"Fields"},{"location":"fields/#time","text":"Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. 
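The Volume sources listed above plug into a Workflow through `spec.volumes`, with matching `volumeMounts` declared on the containers that need them; a minimal `emptyDir` sketch (volume name and mount path are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: volumes-emptydir-example-
spec:
  entrypoint: main
  volumes:
    - name: scratch
      emptyDir: {}            # temporary directory that lives as long as the pod
  templates:
    - name: main
      container:
        image: busybox
        command: [sh, "-c"]
        args: ["echo hello > /scratch/out.txt && cat /scratch/out.txt"]
        volumeMounts:
          - name: scratch     # must match the volume name above
            mountPath: /scratch
```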
Wrappers are provided for many of the factory methods that the time package offers.","title":"Time"},{"location":"fields/#objectreference","text":"ObjectReference contains enough information to let you inspect or modify the referred object.","title":"ObjectReference"},{"location":"fields/#fields_112","text":"Field Name Field Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: \"spec.containers{name}\" (where \"name\" refers to the name of the container that triggered the event) or if no container name is specified \"spec.containers[2]\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids","title":"Fields"},{"location":"fields/#duration","text":"Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json. Examples with this field (click to open) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml)","title":"Duration"},{"location":"fields/#fields_113","text":"Field Name Field Type Description duration string No description available","title":"Fields"},{"location":"fields/#labelselector","text":"A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Examples with this field (click to open) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml)","title":"LabelSelector"},{"location":"fields/#fields_114","text":"Field Name Field Type Description matchExpressions Array< LabelSelectorRequirement > matchExpressions is a list of label selector requirements. The requirements are ANDed. matchLabels Map< string , string > matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is \"key\", the operator is \"In\", and the values array contains only \"value\". 
The requirements are ANDed.","title":"Fields"},{"location":"fields/#intorstring","text":"No description available Examples with this field (click to open) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml)","title":"IntOrString"},{"location":"fields/#container","text":"A single application container that you want to run within a pod. Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - 
[`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - [`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - 
[`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - 
[`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - [`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - 
[`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - 
[`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - [`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml)","title":"Container"},{"location":"fields/#fields_115","text":"Field Name Field Type Description args Array< string > Arguments to the entrypoint. The docker image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command Array< string > Entrypoint array. Not executed within a shell. The docker image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env Array< EnvVar > List of environment variables to set in the container. Cannot be updated. envFrom Array< EnvFromSource > List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. image string Docker image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - \"Always\" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - \"IfNotPresent\" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - \"Never\" means that kubelet never pulls an image, but only uses a local image. 
Container will fail if the image isn't present lifecycle Lifecycle Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe Probe Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports Array< ContainerPort > List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated. readinessProbe Probe Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources ResourceRequirements Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext SecurityContext SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe Probe StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. 
terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - \"FallbackToLogsOnError\" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - \"File\" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices Array< VolumeDevice > volumeDevices is the list of block devices to be used by the container. volumeMounts Array< VolumeMount > Pod volumes to mount into the container's filesystem. Cannot be updated. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.","title":"Fields"},{"location":"fields/#configmapkeyselector","text":"Selects a key from a ConfigMap. Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml)","title":"ConfigMapKeySelector"},{"location":"fields/#fields_116","text":"Field Name Field Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined","title":"Fields"},{"location":"fields/#volumemount","text":"VolumeMount describes a mounting of a Volume within a container. 
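Before the list of official examples below, a minimal illustrative sketch (not taken from those examples) of how a VolumeMount is typically declared on a workflow template's container; the workflow name, template name, image, volume name, and paths are assumptions for illustration only:

```yaml
# Illustrative only — not one of the bundled examples.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: volumemount-sketch-    # hypothetical name
spec:
  entrypoint: main
  volumes:
    - name: workdir                    # hypothetical volume, backed by an emptyDir
      emptyDir: {}
  templates:
    - name: main
      container:
        image: alpine:3.18             # arbitrary image
        command: [sh, -c, "ls /work"]
        volumeMounts:
          - name: workdir              # must match the Name of a Volume
            mountPath: /work           # path within the container; must not contain ':'
            readOnly: false            # read-write (the default) when false or unspecified
```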
Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"VolumeMount"},{"location":"fields/#fields_117","text":"Field Name Field Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to \"\" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to \"\" (volume's root). SubPathExpr and SubPath are mutually exclusive.","title":"Fields"},{"location":"fields/#envvar","text":"EnvVar represents an environment variable present in a Container. Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml)","title":"EnvVar"},{"location":"fields/#fields_118","text":"Field Name Field Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. 
value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to \"\". valueFrom EnvVarSource Source for the environment variable's value. Cannot be used if value is not empty.","title":"Fields"},{"location":"fields/#envfromsource","text":"EnvFromSource represents the source of a set of ConfigMaps","title":"EnvFromSource"},{"location":"fields/#fields_119","text":"Field Name Field Type Description configMapRef ConfigMapEnvSource The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef SecretEnvSource The Secret to select from","title":"Fields"},{"location":"fields/#lifecycle","text":"Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.","title":"Lifecycle"},{"location":"fields/#fields_120","text":"Field Name Field Type Description postStart LifecycleHandler PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop LifecycleHandler PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks","title":"Fields"},{"location":"fields/#probe","text":"Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.","title":"Probe"},{"location":"fields/#fields_121","text":"Field Name Field Type Description exec ExecAction Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc GRPCAction GRPC specifies an action involving a GRPC port. This is an alpha field and requires enabling GRPCContainerProbe feature gate. httpGet HTTPGetAction HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket TCPSocketAction TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes","title":"Fields"},{"location":"fields/#containerport","text":"ContainerPort represents a network port in a single container.","title":"ContainerPort"},{"location":"fields/#fields_122","text":"Field Name Field Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to \"TCP\". Possible enum values: - \"SCTP\" is the SCTP protocol. - \"TCP\" is the TCP protocol. - \"UDP\" is the UDP protocol.","title":"Fields"},{"location":"fields/#resourcerequirements","text":"ResourceRequirements describes the compute resource requirements. 
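Before the list of official examples below, a minimal illustrative sketch (not one of those examples) of ResourceRequirements set on a template's container; the request and limit values are arbitrary assumptions:

```yaml
# Illustrative only — not one of the bundled examples.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: resources-sketch-      # hypothetical name
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.18             # arbitrary image
        command: [sh, -c, "echo hello"]
        resources:
          requests:                    # minimum compute resources required
            cpu: 100m
            memory: 64Mi
          limits:                      # maximum compute resources allowed
            cpu: 500m
            memory: 128Mi
```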
Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml)","title":"ResourceRequirements"},{"location":"fields/#fields_123","text":"Field Name Field Type Description limits Quantity Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests Quantity Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/","title":"Fields"},{"location":"fields/#securitycontext","text":"SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Examples with this field (click to open) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml)","title":"SecurityContext"},{"location":"fields/#fields_124","text":"Field Name Field Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities Capabilities The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. 
readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions SELinuxOptions The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile SeccompProfile The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions WindowsSecurityContextOptions The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.","title":"Fields"},{"location":"fields/#volumedevice","text":"volumeDevice describes a mapping of a raw block device within a container.","title":"VolumeDevice"},{"location":"fields/#fields_125","text":"Field Name Field Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod","title":"Fields"},{"location":"fields/#secretkeyselector","text":"SecretKeySelector selects a key of a Secret. Examples with this field (click to open) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml)","title":"SecretKeySelector"},{"location":"fields/#fields_126","text":"Field Name Field Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined","title":"Fields"},{"location":"fields/#managedfieldsentry","text":"ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to.","title":"ManagedFieldsEntry"},{"location":"fields/#fields_127","text":"Field Name Field Type Description apiVersion string APIVersion defines the version of this resource that this field set applies to. The format is \"group/version\" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted. fieldsType string FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: \"FieldsV1\" fieldsV1 FieldsV1 FieldsV1 holds the first JSON version format as described in the \"FieldsV1\" type. manager string Manager is an identifier of the workflow managing these fields. operation string Operation is the type of operation which lead to this ManagedFieldsEntry being created. The only valid values for this field are 'Apply' and 'Update'. subresource string Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource. time Time Time is timestamp of when these fields were set. It should always be empty if Operation is 'Apply'","title":"Fields"},{"location":"fields/#ownerreference","text":"OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field.","title":"OwnerReference"},{"location":"fields/#fields_128","text":"Field Name Field Type Description apiVersion string API version of the referent. blockOwnerDeletion boolean If true, AND if the owner has the \"foregroundDeletion\" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. Defaults to false. To set this field, a user needs \"delete\" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names uid string UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids","title":"Fields"},{"location":"fields/#nodeaffinity","text":"Node affinity is a group of node affinity scheduling rules.","title":"NodeAffinity"},{"location":"fields/#fields_129","text":"Field Name Field Type Description preferredDuringSchedulingIgnoredDuringExecution Array< PreferredSchedulingTerm > The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution NodeSelector If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.","title":"Fields"},{"location":"fields/#podaffinity","text":"Pod affinity is a group of inter pod affinity scheduling rules.","title":"PodAffinity"},{"location":"fields/#fields_130","text":"Field Name Field Type Description preferredDuringSchedulingIgnoredDuringExecution Array< WeightedPodAffinityTerm > The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution Array< PodAffinityTerm > If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.","title":"Fields"},{"location":"fields/#podantiaffinity","text":"Pod anti affinity is a group of inter pod anti affinity scheduling rules.","title":"PodAntiAffinity"},{"location":"fields/#fields_131","text":"Field Name Field Type Description preferredDuringSchedulingIgnoredDuringExecution Array< WeightedPodAffinityTerm > The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. requiredDuringSchedulingIgnoredDuringExecution Array< PodAffinityTerm > If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. 
due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.","title":"Fields"},{"location":"fields/#poddnsconfigoption","text":"PodDNSConfigOption defines DNS resolver options of a pod. Examples with this field (click to open) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml)","title":"PodDNSConfigOption"},{"location":"fields/#fields_132","text":"Field Name Field Type Description name string Required. value string No description available","title":"Fields"},{"location":"fields/#selinuxoptions","text":"SELinuxOptions are the labels to be applied to the container","title":"SELinuxOptions"},{"location":"fields/#fields_133","text":"Field Name Field Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container.","title":"Fields"},{"location":"fields/#seccompprofile","text":"SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set.","title":"SeccompProfile"},{"location":"fields/#fields_134","text":"Field Name Field Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is \"Localhost\". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - \"Localhost\" indicates a profile defined in a file on the node should be used. The file's location relative to /seccomp. - \"RuntimeDefault\" represents the default container runtime seccomp profile. - \"Unconfined\" indicates no seccomp profile is applied (A.K.A. unconfined).","title":"Fields"},{"location":"fields/#sysctl","text":"Sysctl defines a kernel parameter to be set","title":"Sysctl"},{"location":"fields/#fields_135","text":"Field Name Field Type Description name string Name of a property to set value string Value of a property to set","title":"Fields"},{"location":"fields/#windowssecuritycontextoptions","text":"WindowsSecurityContextOptions contain Windows-specific options and credentials.","title":"WindowsSecurityContextOptions"},{"location":"fields/#fields_136","text":"Field Name Field Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. 
All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.","title":"Fields"},{"location":"fields/#persistentvolumeclaimspec","text":"PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Examples with this field (click to open) - [`archive-location.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/archive-location.yaml) - [`arguments-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-artifacts.yaml) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`arguments-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters.yaml) - [`artifact-disable-archive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-disable-archive.yaml) - [`artifact-gc-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-gc-workflow.yaml) - [`artifact-passing-subpath.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing-subpath.yaml) - [`artifact-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-passing.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`artifact-repository-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-repository-ref.yaml) - [`artifactory-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifactory-artifact.yaml) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`ci-output-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-output-artifact.yaml) - [`ci-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml) - [`ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/ci.yaml) - [`cluster-wftmpl-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml) - [`clustertemplates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/clustertemplates.yaml) - [`mixed-cluster-namespaced-wftmpl-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/mixed-cluster-namespaced-wftmpl-steps.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - 
[`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cluster-workflow-template/workflow-template-ref.yaml) - [`coinflip-recursive.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip-recursive.yaml) - [`coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/coinflip.yaml) - [`colored-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/colored-logs.yaml) - [`conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-artifacts.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`conditionals-complex.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals-complex.yaml) - [`conditionals.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditionals.yaml) - [`graph-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/graph-workflow.yaml) - [`outputs-result-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/outputs-result-workflow.yaml) - [`parallel-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/parallel-workflow.yaml) - [`sequence-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/sequence-workflow.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/continue-on-fail.yaml) - [`cron-backfill.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-backfill.yaml) - [`cron-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/cron-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-coinflip.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-coinflip.yaml) - [`dag-conditional-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-artifacts.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`dag-continue-on-fail.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-continue-on-fail.yaml) - [`dag-custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-custom-metrics.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`dag-diamond-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond-steps.yaml) - [`dag-diamond.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-diamond.yaml) - [`dag-disable-failFast.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-disable-failFast.yaml) - [`dag-enhanced-depends.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-enhanced-depends.yaml) - 
[`dag-inline-clusterworkflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-clusterworkflowtemplate.yaml) - [`dag-inline-cronworkflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-cronworkflow.yaml) - [`dag-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflow.yaml) - [`dag-inline-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-inline-workflowtemplate.yaml) - [`dag-multiroot.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-multiroot.yaml) - [`dag-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-nested.yaml) - [`dag-targets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-targets.yaml) - [`dag-task-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-task-level-timeout.yaml) - [`data-transformations.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/data-transformations.yaml) - [`default-pdb-support.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/default-pdb-support.yaml) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`exit-code-output-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-code-output-variable.yaml) - [`exit-handler-dag-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-dag-level.yaml) - [`exit-handler-slack.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-slack.yaml) - [`exit-handler-step-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-step-level.yaml) - [`exit-handler-with-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-artifacts.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`exit-handlers.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handlers.yaml) - [`expression-destructure-json.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-destructure-json.yaml) - [`expression-reusing-verbose-snippets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-reusing-verbose-snippets.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`forever.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/forever.yaml) - [`fun-with-gifs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fun-with-gifs.yaml) - [`gc-ttl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/gc-ttl.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - 
[`global-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`hdfs-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hdfs-artifact.yaml) - [`hello-hybrid.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-hybrid.yaml) - [`hello-windows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-windows.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/hello-world.yaml) - [`http-hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-hello-world.yaml) - [`http-success-condition.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/http-success-condition.yaml) - [`image-pull-secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/image-pull-secrets.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`input-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-azure.yaml) - [`input-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-gcs.yaml) - [`input-artifact-git.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml) - [`input-artifact-http.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-http.yaml) - [`input-artifact-oss.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-oss.yaml) - [`input-artifact-raw.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-raw.yaml) - [`input-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-s3.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-json-patch-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-json-patch-workflow.yaml) - [`k8s-owner-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-owner-reference.yaml) - [`k8s-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-patch.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`key-only-artifact.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/key-only-artifact.yaml) - [`label-value-from-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/label-value-from-workflow.yaml) - [`life-cycle-hooks-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-tmpl-level.yaml) - [`life-cycle-hooks-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/life-cycle-hooks-wf-level.yaml) - [`loops-arbitrary-sequential-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-arbitrary-sequential-steps.yaml) - [`loops-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-dag.yaml) - [`loops-maps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-maps.yaml) - 
[`loops-param-argument.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-argument.yaml) - [`loops-param-result.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-param-result.yaml) - [`loops-sequence.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops-sequence.yaml) - [`loops.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/loops.yaml) - [`map-reduce.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/map-reduce.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - [`node-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/node-selector.yaml) - [`output-artifact-azure.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-azure.yaml) - [`output-artifact-gcs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-gcs.yaml) - [`output-artifact-s3.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml) - [`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parallelism-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-limit.yaml) - [`parallelism-nested-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-dag.yaml) - [`parallelism-nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested-workflow.yaml) - [`parallelism-nested.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-nested.yaml) - [`parallelism-template-limit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parallelism-template-limit.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-script.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-gc-strategy-with-label-selector.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy-with-label-selector.yaml) - [`pod-gc-strategy.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-gc-strategy.yaml) - [`pod-metadata-wf-field.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata-wf-field.yaml) - [`pod-metadata.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-metadata.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml) - [`recursive-for-loop.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/recursive-for-loop.yaml) - [`resubmit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/resubmit.yaml) - [`retry-backoff.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-backoff.yaml) - 
[`retry-conditional.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-conditional.yaml) - [`retry-container-to-completion.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container-to-completion.yaml) - [`retry-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-container.yaml) - [`retry-on-error.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-on-error.yaml) - [`retry-script.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-script.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/retry-with-steps.yaml) - [`scripts-bash.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-bash.yaml) - [`scripts-javascript.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-javascript.yaml) - [`scripts-python.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/scripts-python.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`sidecar-dind.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-dind.yaml) - [`sidecar-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar-nginx.yaml) - [`sidecar.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/sidecar.yaml) - [`status-reference.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/status-reference.yaml) - [`step-level-timeout.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/step-level-timeout.yaml) - [`steps-inline-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps-inline-workflow.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/steps.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml) - [`suspend-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template.yaml) - [`synchronization-mutex-tmpl-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-tmpl-level.yaml) - [`synchronization-mutex-wf-level.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/synchronization-mutex-wf-level.yaml) - [`template-defaults.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-defaults.yaml) - [`template-on-exit.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/template-on-exit.yaml) - [`timeouts-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-step.yaml) - [`timeouts-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/timeouts-workflow.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml) - [`volumes-pvc.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-pvc.yaml) - [`webhdfs-input-output-artifacts.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/webhdfs-input-output-artifacts.yaml) - [`work-avoidance.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/work-avoidance.yaml) - 
[`event-consumer-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-event-binding/event-consumer-workflowtemplate.yaml) - [`workflow-of-workflows.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-of-workflows.yaml) - [`dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/dag.yaml) - [`hello-world.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/hello-world.yaml) - [`retry-with-steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/retry-with-steps.yaml) - [`steps.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/steps.yaml) - [`templates.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/templates.yaml) - [`workflow-archive-logs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-archive-logs.yaml) - [`workflow-template-ref-with-entrypoint-arg-passing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref-with-entrypoint-arg-passing.yaml) - [`workflow-template-ref.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/workflow-template/workflow-template-ref.yaml)","title":"PersistentVolumeClaimSpec"},{"location":"fields/#fields_137","text":"Field Name Field Type Description accessModes Array< string > AccessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource TypedLocalObjectReference This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. dataSourceRef TypedLocalObjectReference Specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Alpha) Using this field requires the AnyVolumeDataSource feature gate to be enabled. resources ResourceRequirements Resources represents the minimum resources the volume should have. 
If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector LabelSelector A label query over volumes to consider for binding. storageClassName string Name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string VolumeName is the binding reference to the PersistentVolume backing this claim.","title":"Fields"},{"location":"fields/#persistentvolumeclaimstatus","text":"PersistentVolumeClaimStatus is the current status of a persistent volume claim.","title":"PersistentVolumeClaimStatus"},{"location":"fields/#fields_138","text":"Field Name Field Type Description accessModes Array< string > AccessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResources Quantity The storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity Quantity Represents the actual resources of the underlying volume. conditions Array< PersistentVolumeClaimCondition > Current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. phase string Phase represents the current phase of PersistentVolumeClaim. Possible enum values: - \"Bound\" used for PersistentVolumeClaims that are bound - \"Lost\" used for PersistentVolumeClaims that lost their underlying PersistentVolume. The claim was bound to a PersistentVolume and this volume does not exist any longer and all data on it was lost. - \"Pending\" used for PersistentVolumeClaims that are not yet bound resizeStatus string ResizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.","title":"Fields"},{"location":"fields/#awselasticblockstorevolumesource","text":"Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling.","title":"AWSElasticBlockStoreVolumeSource"},{"location":"fields/#fields_139","text":"Field Name Field Type Description fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. 
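The PersistentVolumeClaimSpec fields described above are what an Argo Workflow's `volumeClaimTemplates` entry ultimately produces. A minimal sketch, assuming an illustrative claim name (`workdir`), a hypothetical StorageClass (`standard`), and an arbitrary container image; compare the `volumes-pvc.yaml` example linked earlier:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: pvc-sketch-
spec:
  entrypoint: main
  volumeClaimTemplates:
    - metadata:
        name: workdir                 # illustrative name
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard    # hypothetical StorageClass
        resources:
          requests:
            storage: 1Gi              # minimum capacity the claim asks for
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [sh, -c, "echo hello > /work/out.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /work
```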
Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). readOnly boolean Specify \"true\" to force and set the ReadOnly property in VolumeMounts to \"true\". If omitted, the default is \"false\". More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string Unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore","title":"Fields"},{"location":"fields/#azurediskvolumesource","text":"AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.","title":"AzureDiskVolumeSource"},{"location":"fields/#fields_140","text":"Field Name Field Type Description cachingMode string Host Caching mode: None, Read Only, Read Write. diskName string The Name of the data disk in the blob storage diskURI string The URI the data disk in the blob storage fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. kind string Expected values Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.","title":"Fields"},{"location":"fields/#azurefilevolumesource","text":"AzureFile represents an Azure File Service mount on the host and bind mount to the pod.","title":"AzureFileVolumeSource"},{"location":"fields/#fields_141","text":"Field Name Field Type Description readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string the name of secret that contains Azure Storage Account Name and Key shareName string Share Name","title":"Fields"},{"location":"fields/#cephfsvolumesource","text":"Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling.","title":"CephFSVolumeSource"},{"location":"fields/#fields_142","text":"Field Name Field Type Description monitors Array< string > Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef LocalObjectReference Optional: SecretRef is reference to the authentication secret for User, default is empty. 
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string Optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it","title":"Fields"},{"location":"fields/#cindervolumesource","text":"Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling.","title":"CinderVolumeSource"},{"location":"fields/#fields_143","text":"Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef LocalObjectReference Optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volume id used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md","title":"Fields"},{"location":"fields/#configmapvolumesource","text":"Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml)","title":"ConfigMapVolumeSource"},{"location":"fields/#fields_144","text":"Field Name Field Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its keys must be defined","title":"Fields"},{"location":"fields/#csivolumesource","text":"Represents a source location of a volume to mount, managed by an external CSI driver","title":"CSIVolumeSource"},{"location":"fields/#fields_145","text":"Field Name Field Type Description driver string Driver is the name of the CSI driver that handles this volume. 
Consult with your admin for the correct name as registered in the cluster. fsType string Filesystem type to mount. Ex. \"ext4\", \"xfs\", \"ntfs\". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef LocalObjectReference NodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean Specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes Map< string , string > VolumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values.","title":"Fields"},{"location":"fields/#downwardapivolumesource","text":"DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling.","title":"DownwardAPIVolumeSource"},{"location":"fields/#fields_146","text":"Field Name Field Type Description defaultMode integer Optional: mode bits to use on created files by default. Must be a Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items Array< DownwardAPIVolumeFile > Items is a list of downward API volume file","title":"Fields"},{"location":"fields/#emptydirvolumesource","text":"Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling. Examples with this field (click to open) - [`artifacts-workflowtemplate.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifacts-workflowtemplate.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`init-container.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/init-container.yaml) - [`volumes-emptydir.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-emptydir.yaml)","title":"EmptyDirVolumeSource"},{"location":"fields/#fields_147","text":"Field Name Field Type Description medium string What type of storage medium should back this directory. The default is \"\" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit Quantity Total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. 
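To make the EmptyDirVolumeSource fields concrete, here is a hedged fragment of a Workflow spec (volume name and image are illustrative) that backs a scratch directory with a size-limited, memory-medium emptyDir:

```yaml
spec:
  entrypoint: main
  volumes:
    - name: scratch                 # illustrative name
      emptyDir:
        medium: Memory              # "" (node default) or Memory
        sizeLimit: 256Mi            # for Memory medium, effective cap is min(sizeLimit, sum of container memory limits)
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [sh, -c, "dd if=/dev/zero of=/scratch/blob bs=1M count=10"]
        volumeMounts:
          - name: scratch
            mountPath: /scratch
```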
More info: http://kubernetes.io/docs/user-guide/volumes#emptydir","title":"Fields"},{"location":"fields/#ephemeralvolumesource","text":"Represents an ephemeral volume that is handled by a normal storage driver.","title":"EphemeralVolumeSource"},{"location":"fields/#fields_148","text":"Field Name Field Type Description volumeClaimTemplate PersistentVolumeClaimTemplate Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be - where is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil.","title":"Fields"},{"location":"fields/#fcvolumesource","text":"Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling.","title":"FCVolumeSource"},{"location":"fields/#fields_149","text":"Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. lun integer Optional: FC target lun number readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs Array< string > Optional: FC target worldwide names (WWNs) wwids Array< string > Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously.","title":"Fields"},{"location":"fields/#flexvolumesource","text":"FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin.","title":"FlexVolumeSource"},{"location":"fields/#fields_150","text":"Field Name Field Type Description driver string Driver is the name of the driver to use for this volume. fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". The default filesystem depends on FlexVolume script. options Map< string , string > Optional: Extra command options if any. readOnly boolean Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference Optional: SecretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts.","title":"Fields"},{"location":"fields/#flockervolumesource","text":"Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. 
Flocker volumes do not support ownership management or SELinux relabeling.","title":"FlockerVolumeSource"},{"location":"fields/#fields_151","text":"Field Name Field Type Description datasetName string Name of the dataset stored as metadata -> name on the dataset for Flocker should be considered as deprecated datasetUUID string UUID of the dataset. This is unique identifier of a Flocker dataset","title":"Fields"},{"location":"fields/#gcepersistentdiskvolumesource","text":"Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling.","title":"GCEPersistentDiskVolumeSource"},{"location":"fields/#fields_152","text":"Field Name Field Type Description fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as \"1\". Similarly, the volume partition for /dev/sda is \"0\" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string Unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk","title":"Fields"},{"location":"fields/#gitrepovolumesource","text":"Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container.","title":"GitRepoVolumeSource"},{"location":"fields/#fields_153","text":"Field Name Field Type Description directory string Target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string Repository URL revision string Commit hash for the specified revision.","title":"Fields"},{"location":"fields/#glusterfsvolumesource","text":"Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling.","title":"GlusterfsVolumeSource"},{"location":"fields/#fields_154","text":"Field Name Field Type Description endpoints string EndpointsName is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string Path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean ReadOnly here will force the Glusterfs volume to be mounted with read-only permissions. 
Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod","title":"Fields"},{"location":"fields/#hostpathvolumesource","text":"Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling.","title":"HostPathVolumeSource"},{"location":"fields/#fields_155","text":"Field Name Field Type Description path string Path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string Type for HostPath Volume Defaults to \"\" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath","title":"Fields"},{"location":"fields/#iscsivolumesource","text":"Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling.","title":"ISCSIVolumeSource"},{"location":"fields/#fields_156","text":"Field Name Field Type Description chapAuthDiscovery boolean whether support iSCSI Discovery CHAP authentication chapAuthSession boolean whether support iSCSI Session CHAP authentication fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string Custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface : will be created for the connection. iqn string Target iSCSI Qualified Name. iscsiInterface string iSCSI Interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer iSCSI Target Lun number. portals Array< string > iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef LocalObjectReference CHAP Secret for iSCSI target and initiator authentication targetPortal string iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).","title":"Fields"},{"location":"fields/#nfsvolumesource","text":"Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling.","title":"NFSVolumeSource"},{"location":"fields/#fields_157","text":"Field Name Field Type Description path string Path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean ReadOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string Server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs","title":"Fields"},{"location":"fields/#persistentvolumeclaimvolumesource","text":"PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). 
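As a sketch of the NFSVolumeSource fields described above, a workflow-level volume can mount an export read-only; the server address and export path here are hypothetical:

```yaml
spec:
  volumes:
    - name: shared-data
      nfs:
        server: nfs.example.internal   # hypothetical NFS server
        path: /exports/data            # path exported by that server
        readOnly: true                 # mount the export read-only
  templates:
    - name: consume
      container:
        image: alpine:3.18
        command: [ls, -l, /data]
        volumeMounts:
          - name: shared-data
            mountPath: /data
```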
Examples with this field (click to open) - [`volumes-existing.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/volumes-existing.yaml)","title":"PersistentVolumeClaimVolumeSource"},{"location":"fields/#fields_158","text":"Field Name Field Type Description claimName string ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean Will force the ReadOnly setting in VolumeMounts. Default false.","title":"Fields"},{"location":"fields/#photonpersistentdiskvolumesource","text":"Represents a Photon Controller persistent disk resource.","title":"PhotonPersistentDiskVolumeSource"},{"location":"fields/#fields_159","text":"Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. pdID string ID that identifies Photon Controller persistent disk","title":"Fields"},{"location":"fields/#portworxvolumesource","text":"PortworxVolumeSource represents a Portworx volume resource.","title":"PortworxVolumeSource"},{"location":"fields/#fields_160","text":"Field Name Field Type Description fsType string FSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\". Implicitly inferred to be \"ext4\" if unspecified. readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string VolumeID uniquely identifies a Portworx volume","title":"Fields"},{"location":"fields/#projectedvolumesource","text":"Represents a projected volume source","title":"ProjectedVolumeSource"},{"location":"fields/#fields_161","text":"Field Name Field Type Description defaultMode integer Mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources Array< VolumeProjection > list of volume projections","title":"Fields"},{"location":"fields/#quobytevolumesource","text":"Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling.","title":"QuobyteVolumeSource"},{"location":"fields/#fields_162","text":"Field Name Field Type Description group string Group to map volume access to Default is no group readOnly boolean ReadOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. 
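A minimal sketch of PersistentVolumeClaimVolumeSource usage, assuming a pre-existing, already-bound claim named `my-existing-pvc` in the workflow's namespace (compare the `volumes-existing.yaml` example linked above):

```yaml
spec:
  volumes:
    - name: workdir
      persistentVolumeClaim:
        claimName: my-existing-pvc   # hypothetical claim; must already exist in the namespace
        readOnly: false              # default; set true to force read-only mounts
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [sh, -c, "cat /work/previous-output.txt || true"]
        volumeMounts:
          - name: workdir
            mountPath: /work
```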
registry string Registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string Tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string User to map volume access to Defaults to serivceaccount user volume string Volume is a string that references an already created Quobyte volume by name.","title":"Fields"},{"location":"fields/#rbdvolumesource","text":"Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling.","title":"RBDVolumeSource"},{"location":"fields/#fields_163","text":"Field Name Field Type Description fsType string Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string The rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string Keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors Array< string > A collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string The rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef LocalObjectReference SecretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string The rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it","title":"Fields"},{"location":"fields/#scaleiovolumesource","text":"ScaleIOVolumeSource represents a persistent ScaleIO volume","title":"ScaleIOVolumeSource"},{"location":"fields/#fields_164","text":"Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Default is \"xfs\". gateway string The host address of the ScaleIO API Gateway. protectionDomain string The name of the ScaleIO Protection Domain for the configured storage. readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference SecretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean Flag to enable/disable SSL communication with Gateway, default false storageMode string Indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string The ScaleIO Storage Pool associated with the protection domain. system string The name of the storage system as configured in ScaleIO. 
volumeName string The name of a volume already created in the ScaleIO system that is associated with this volume source.","title":"Fields"},{"location":"fields/#secretvolumesource","text":"Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml)","title":"SecretVolumeSource"},{"location":"fields/#fields_165","text":"Field Name Field Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. optional boolean Specify whether the Secret or its keys must be defined secretName string Name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret","title":"Fields"},{"location":"fields/#storageosvolumesource","text":"Represents a StorageOS persistent volume resource.","title":"StorageOSVolumeSource"},{"location":"fields/#fields_166","text":"Field Name Field Type Description fsType string Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. readOnly boolean Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef LocalObjectReference SecretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string VolumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string VolumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to \"default\" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created.","title":"Fields"},{"location":"fields/#vspherevirtualdiskvolumesource","text":"Represents a vSphere volume resource.","title":"VsphereVirtualDiskVolumeSource"},{"location":"fields/#fields_167","text":"Field Name Field Type Description fsType string Filesystem type to mount. 
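A hedged sketch of SecretVolumeSource, projecting one key of a hypothetical Secret into a file with restricted permissions (compare the `secrets.yaml` example linked above):

```yaml
spec:
  volumes:
    - name: creds
      secret:
        secretName: my-credentials   # hypothetical Secret in the same namespace
        defaultMode: 0400            # octal in YAML; JSON clients must send the decimal value 256
        optional: false              # fail volume setup if the Secret is missing
        items:
          - key: token               # only this key is projected
            path: token.txt          # relative path inside the mount
  templates:
    - name: use-token
      container:
        image: alpine:3.18
        command: [cat, /etc/creds/token.txt]
        volumeMounts:
          - name: creds
            mountPath: /etc/creds
```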
Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. storagePolicyID string Storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string Storage Policy Based Management (SPBM) profile name. volumePath string Path that identifies vSphere volume vmdk","title":"Fields"},{"location":"fields/#labelselectorrequirement","text":"A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.","title":"LabelSelectorRequirement"},{"location":"fields/#fields_168","text":"Field Name Field Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values Array< string > values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.","title":"Fields"},{"location":"fields/#envvarsource","text":"EnvVarSource represents a source for the value of an EnvVar. Examples with this field (click to open) - [`arguments-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/arguments-parameters-from-configmap.yaml) - [`artifact-path-placeholders.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/artifact-path-placeholders.yaml) - [`conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/conditional-parameters.yaml) - [`workspace-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/container-set-template/workspace-workflow.yaml) - [`custom-metrics.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/custom-metrics.yaml) - [`dag-conditional-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-conditional-parameters.yaml) - [`exit-handler-with-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/exit-handler-with-param.yaml) - [`expression-tag-template-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/expression-tag-template-workflow.yaml) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml) - [`global-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-outputs.yaml) - [`global-parameters-from-configmap-referenced-as-local-variable.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap-referenced-as-local-variable.yaml) - [`global-parameters-from-configmap.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/global-parameters-from-configmap.yaml) - [`handle-large-output-results.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/handle-large-output-results.yaml) - [`intermediate-parameters.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/intermediate-parameters.yaml) - [`k8s-wait-wf.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/k8s-wait-wf.yaml) - [`nested-workflow.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/nested-workflow.yaml) - 
[`output-parameter.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/output-parameter.yaml) - [`parameter-aggregation-dag.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation-dag.yaml) - [`parameter-aggregation.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/parameter-aggregation.yaml) - [`pod-spec-from-previous-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-from-previous-step.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml) - [`suspend-template-outputs.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/suspend-template-outputs.yaml)","title":"EnvVarSource"},{"location":"fields/#fields_169","text":"Field Name Field Type Description configMapKeyRef ConfigMapKeySelector Selects a key of a ConfigMap. fieldRef ObjectFieldSelector Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels[''] , metadata.annotations[''] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef ResourceFieldSelector Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef SecretKeySelector Selects a key of a secret in the pod's namespace","title":"Fields"},{"location":"fields/#configmapenvsource","text":"ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables.","title":"ConfigMapEnvSource"},{"location":"fields/#fields_170","text":"Field Name Field Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined","title":"Fields"},{"location":"fields/#secretenvsource","text":"SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables.","title":"SecretEnvSource"},{"location":"fields/#fields_171","text":"Field Name Field Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined","title":"Fields"},{"location":"fields/#lifecyclehandler","text":"LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified.","title":"LifecycleHandler"},{"location":"fields/#fields_172","text":"Field Name Field Type Description exec ExecAction Exec specifies the action to take. httpGet HTTPGetAction HTTPGet specifies the http request to perform. tcpSocket TCPSocketAction Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified.","title":"Fields"},{"location":"fields/#execaction","text":"ExecAction describes a \"run in container\" action. 
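The EnvVarSource variants above are typically combined in a template container's `env`; a minimal sketch with hypothetical ConfigMap and Secret names:

```yaml
templates:
  - name: print-env
    container:
      image: alpine:3.18
      command: [sh, -c, "env | sort"]
      env:
        - name: APP_MODE
          valueFrom:
            configMapKeyRef:          # key of a ConfigMap
              name: app-config        # hypothetical ConfigMap
              key: mode
              optional: true
        - name: API_TOKEN
          valueFrom:
            secretKeyRef:             # key of a Secret in the pod's namespace
              name: api-secret        # hypothetical Secret
              key: token
        - name: POD_NAME
          valueFrom:
            fieldRef:                 # pod field, e.g. metadata.name
              fieldPath: metadata.name
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:         # container resource, e.g. limits.cpu
              resource: limits.cpu
```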
Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml)","title":"ExecAction"},{"location":"fields/#fields_173","text":"Field Name Field Type Description command Array< string > Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('","title":"Fields"},{"location":"fields/#grpcaction","text":"No description available","title":"GRPCAction"},{"location":"fields/#fields_174","text":"Field Name Field Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.","title":"Fields"},{"location":"fields/#httpgetaction","text":"HTTPGetAction describes an action based on HTTP Get requests. Examples with this field (click to open) - [`daemon-nginx.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-nginx.yaml) - [`daemon-step.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/daemon-step.yaml) - [`dag-daemon-task.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dag-daemon-task.yaml) - [`influxdb-ci.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/influxdb-ci.yaml)","title":"HTTPGetAction"},{"location":"fields/#fields_175","text":"Field Name Field Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set \"Host\" in httpHeaders instead. httpHeaders Array< HTTPHeader > Custom headers to set in the request. HTTP allows repeated headers. path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - \"HTTP\" means that the scheme used will be http:// - \"HTTPS\" means that the scheme used will be https://","title":"Fields"},{"location":"fields/#tcpsocketaction","text":"TCPSocketAction describes an action based on opening a socket","title":"TCPSocketAction"},{"location":"fields/#fields_176","text":"Field Name Field Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.","title":"Fields"},{"location":"fields/#quantity","text":"Quantity is a fixed-point representation of a number. It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors. The serialization format is: ::= (Note that may be empty, from the \"\" case in .) ::= 0 | 1 | ... | 9 ::= | ::= | . | . | . ::= \"+\" | \"-\" ::= | ::= | | ::= Ki | Mi | Gi | Ti | Pi | Ei (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html) ::= m | \"\" | k | M | G | T | P | E (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.) 
::= \"e\" | \"E\" No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized. Before serializing, Quantity will be put in \"canonical form\". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that: a. No precision is lost b. No fractional digits will be emitted c. The exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative. Examples: 1.5 will be serialized as \"1500m\" 1.5Gi will be serialized as \"1536Mi\" Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise. Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.) This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation. Examples with this field (click to open) - [`dns-config.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/dns-config.yaml) - [`pod-spec-patch-wf-tmpl.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-patch-wf-tmpl.yaml) - [`pod-spec-yaml-patch.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/pod-spec-yaml-patch.yaml)","title":"Quantity"},{"location":"fields/#capabilities","text":"Adds and removes POSIX capabilities from running containers.","title":"Capabilities"},{"location":"fields/#fields_177","text":"Field Name Field Type Description add Array< string > Added capabilities drop Array< string > Removed capabilities","title":"Fields"},{"location":"fields/#fieldsv1","text":"FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format. Each key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f: ', where is the name of a field in a struct, or key in a map 'v: ', where is the exact json formatted value of a list item 'i: ', where is position of a item in a list 'k: ', where is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set. The exact format is defined in sigs.k8s.io/structured-merge-diff","title":"FieldsV1"},{"location":"fields/#preferredschedulingterm","text":"An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).","title":"PreferredSchedulingTerm"},{"location":"fields/#fields_178","text":"Field Name Field Type Description preference NodeSelectorTerm A node selector term, associated with the corresponding weight. 
weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.","title":"Fields"},{"location":"fields/#nodeselector","text":"A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms.","title":"NodeSelector"},{"location":"fields/#fields_179","text":"Field Name Field Type Description nodeSelectorTerms Array< NodeSelectorTerm > Required. A list of node selector terms. The terms are ORed.","title":"Fields"},{"location":"fields/#weightedpodaffinityterm","text":"The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)","title":"WeightedPodAffinityTerm"},{"location":"fields/#fields_180","text":"Field Name Field Type Description podAffinityTerm PodAffinityTerm Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100.","title":"Fields"},{"location":"fields/#podaffinityterm","text":"Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running","title":"PodAffinityTerm"},{"location":"fields/#fields_181","text":"Field Name Field Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means \"this pod's namespace\". An empty selector ({}) matches all namespaces. This field is beta-level and is only honored when PodAffinityNamespaceSelector feature is enabled. namespaces Array< string > namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means \"this pod's namespace\" topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.","title":"Fields"},{"location":"fields/#typedlocalobjectreference","text":"TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace.","title":"TypedLocalObjectReference"},{"location":"fields/#fields_182","text":"Field Name Field Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. 
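A sketch tying PreferredSchedulingTerm and PodAffinityTerm together in a workflow-level `affinity` block; the zone value and the pod label key are illustrative assumptions, not prescribed values:

```yaml
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 50                              # 1-100; higher weights are preferred
          preference:
            matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values: ["us-east-1a"]            # hypothetical zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname   # spread matching pods across nodes
            labelSelector:
              matchExpressions:
                - key: workflows.argoproj.io/workflow   # assumed label on workflow pods
                  operator: Exists
```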
kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced","title":"Fields"},{"location":"fields/#persistentvolumeclaimcondition","text":"PersistentVolumeClaimCondition contails details about state of pvc","title":"PersistentVolumeClaimCondition"},{"location":"fields/#fields_183","text":"Field Name Field Type Description lastProbeTime Time Last time we probed the condition. lastTransitionTime Time Last time the condition transitioned from one status to another. message string Human-readable message indicating details about last transition. reason string Unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports \"ResizeStarted\" that means the underlying persistent volume is being resized. status string No description available type string Possible enum values: - \"FileSystemResizePending\" - controller resize is finished and a file system resize is pending on node - \"Resizing\" - a user trigger resize of pvc has been started","title":"Fields"},{"location":"fields/#keytopath","text":"Maps a string key to a path within a volume.","title":"KeyToPath"},{"location":"fields/#fields_184","text":"Field Name Field Type Description key string The key to project. mode integer Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string The relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'.","title":"Fields"},{"location":"fields/#downwardapivolumefile","text":"DownwardAPIVolumeFile represents information to create the file containing the pod field","title":"DownwardAPIVolumeFile"},{"location":"fields/#fields_185","text":"Field Name Field Type Description fieldRef ObjectFieldSelector Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef ResourceFieldSelector Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.","title":"Fields"},{"location":"fields/#persistentvolumeclaimtemplate","text":"PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource.","title":"PersistentVolumeClaimTemplate"},{"location":"fields/#fields_186","text":"Field Name Field Type Description metadata ObjectMeta May contain labels and annotations that will be copied into the PVC when creating it. 
No other fields are allowed and will be rejected during validation. spec PersistentVolumeClaimSpec The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.","title":"Fields"},{"location":"fields/#volumeprojection","text":"Projection that may be projected along with other supported volume types","title":"VolumeProjection"},{"location":"fields/#fields_187","text":"Field Name Field Type Description configMap ConfigMapProjection information about the configMap data to project downwardAPI DownwardAPIProjection information about the downwardAPI data to project secret SecretProjection information about the secret data to project serviceAccountToken ServiceAccountTokenProjection information about the serviceAccountToken data to project","title":"Fields"},{"location":"fields/#objectfieldselector","text":"ObjectFieldSelector selects an APIVersioned field of an object.","title":"ObjectFieldSelector"},{"location":"fields/#fields_188","text":"Field Name Field Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to \"v1\". fieldPath string Path of the field to select in the specified API version.","title":"Fields"},{"location":"fields/#resourcefieldselector","text":"ResourceFieldSelector represents container resources (cpu, memory) and their output format","title":"ResourceFieldSelector"},{"location":"fields/#fields_189","text":"Field Name Field Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to \"1\" resource string Required: resource to select","title":"Fields"},{"location":"fields/#httpheader_1","text":"HTTPHeader describes a custom header to be used in HTTP probes","title":"HTTPHeader"},{"location":"fields/#fields_190","text":"Field Name Field Type Description name string The header field name value string The header field value","title":"Fields"},{"location":"fields/#nodeselectorterm","text":"A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.","title":"NodeSelectorTerm"},{"location":"fields/#fields_191","text":"Field Name Field Type Description matchExpressions Array< NodeSelectorRequirement > A list of node selector requirements by node's labels. matchFields Array< NodeSelectorRequirement > A list of node selector requirements by node's fields.","title":"Fields"},{"location":"fields/#configmapprojection","text":"Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. Examples with this field (click to open) - [`fibonacci-seq-conditional-param.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/fibonacci-seq-conditional-param.yaml)","title":"ConfigMapProjection"},{"location":"fields/#fields_192","text":"Field Name Field Type Description items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. 
If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its keys must be defined","title":"Fields"},{"location":"fields/#downwardapiprojection","text":"Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode.","title":"DownwardAPIProjection"},{"location":"fields/#fields_193","text":"Field Name Field Type Description items Array< DownwardAPIVolumeFile > Items is a list of DownwardAPIVolume file","title":"Fields"},{"location":"fields/#secretprojection","text":"Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. Examples with this field (click to open) - [`buildkit-template.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/buildkit-template.yaml) - [`secrets.yaml`](https://github.com/argoproj/argo-workflows/blob/main/examples/secrets.yaml)","title":"SecretProjection"},{"location":"fields/#fields_194","text":"Field Name Field Type Description items Array< KeyToPath > If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined","title":"Fields"},{"location":"fields/#serviceaccounttokenprojection","text":"ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise).","title":"ServiceAccountTokenProjection"},{"location":"fields/#fields_195","text":"Field Name Field Type Description audience string Audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer ExpirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. 
path string Path is the path relative to the mount point of the file to project the token into.","title":"Fields"},{"location":"fields/#nodeselectorrequirement","text":"A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.","title":"NodeSelectorRequirement"},{"location":"fields/#fields_196","text":"Field Name Field Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - \"DoesNotExist\" - \"Exists\" - \"Gt\" - \"In\" - \"Lt\" - \"NotIn\" values Array< string > An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.","title":"Fields"},{"location":"high-availability/","text":"High-Availability (HA) \u00b6 Workflow Controller \u00b6 Before v3.0, only one controller could run at once. (If it crashed, Kubernetes would start another pod.) v3.0 For many users, a short loss of workflow service may be acceptable - the new controller will just continue running workflows if it restarts. However, with high service guarantees, new pods may take too long to start running workflows. You should run two replicas, one of which will be kept on hot standby. A voluntary pod disruption can cause both replicas to be replaced at the same time. You should use a Pod Disruption Budget to prevent this and Pod Priority to recover faster from an involuntary pod disruption: Pod Disruption Budget Pod Priority Argo Server \u00b6 v2.6 Run a minimum of two replicas, typically three; otherwise API and webhook requests may be dropped. Tip Consider a multi-AZ deployment using pod anti-affinity .","title":"High-Availability (HA)"},{"location":"high-availability/#high-availability-ha","text":"","title":"High-Availability (HA)"},{"location":"high-availability/#workflow-controller","text":"Before v3.0, only one controller could run at once. (If it crashed, Kubernetes would start another pod.) v3.0 For many users, a short loss of workflow service may be acceptable - the new controller will just continue running workflows if it restarts. However, with high service guarantees, new pods may take too long to start running workflows. You should run two replicas, one of which will be kept on hot standby. A voluntary pod disruption can cause both replicas to be replaced at the same time. You should use a Pod Disruption Budget to prevent this and Pod Priority to recover faster from an involuntary pod disruption: Pod Disruption Budget Pod Priority","title":"Workflow Controller"},{"location":"high-availability/#argo-server","text":"v2.6 Run a minimum of two replicas, typically three; otherwise API and webhook requests may be dropped. Tip Consider a multi-AZ deployment using pod anti-affinity .","title":"Argo Server"},{"location":"http-template/","text":"HTTP Template \u00b6 v3.2 and after HTTP Template is a type of template which can execute HTTP Requests. 
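(A brief aside on the high-availability guidance above, before the HTTP Template example.) The Pod Disruption Budget and pod anti-affinity recommendations can be expressed as ordinary Kubernetes manifests. The sketch below is illustrative only and is not taken from the official manifests; the argo namespace and the app: argo-server label are assumptions based on the release manifests and should be verified against your installation.

```yaml
# Minimal sketch for the HA guidance above (not part of the official manifests).
# Assumptions: namespace "argo" and label "app: argo-server"; verify before applying.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: argo-server
  namespace: argo
spec:
  minAvailable: 1            # keep at least one replica available during voluntary disruptions
  selector:
    matchLabels:
      app: argo-server
---
# Optional patch for the argo-server Deployment: spread replicas across zones.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-server
  namespace: argo
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              topologyKey: topology.kubernetes.io/zone
              labelSelector:
                matchLabels:
                  app: argo-server
```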
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : http-template- spec : entrypoint : main templates : - name : main steps : - - name : get-google-homepage template : http arguments : parameters : [{ name : url , value : \"https://www.google.com\" }] - name : http inputs : parameters : - name : url http : timeoutSeconds : 20 # Default 30 url : \"{{inputs.parameters.url}}\" method : \"GET\" # Default GET headers : - name : \"x-header-name\" value : \"test-value\" # Template will succeed if evaluated to true, otherwise will fail # Available variables: # request.body: string, the request body # request.headers: map[string][]string, the request headers # response.url: string, the request url # response.method: string, the request method # response.statusCode: int, the response status code # response.body: string, the response body # response.headers: map[string][]string, the response headers successCondition : \"response.body contains \\\"google\\\"\" # available since v3.3 body : \"test body\" # Change request body Argo Agent \u00b6 HTTP Templates use the Argo Agent, which executes the requests independently of the controller. The Agent and the Workflow Controller communicate through the WorkflowTaskSet CRD, which is created for each running Workflow that requires the use of the Agent . In order to use the Argo Agent, you will need to ensure that you have added the appropriate workflow RBAC to add an agent role with to Argo Workflows. An example agent role can be found in the quick-start manifests .","title":"HTTP Template"},{"location":"http-template/#http-template","text":"v3.2 and after HTTP Template is a type of template which can execute HTTP Requests. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : http-template- spec : entrypoint : main templates : - name : main steps : - - name : get-google-homepage template : http arguments : parameters : [{ name : url , value : \"https://www.google.com\" }] - name : http inputs : parameters : - name : url http : timeoutSeconds : 20 # Default 30 url : \"{{inputs.parameters.url}}\" method : \"GET\" # Default GET headers : - name : \"x-header-name\" value : \"test-value\" # Template will succeed if evaluated to true, otherwise will fail # Available variables: # request.body: string, the request body # request.headers: map[string][]string, the request headers # response.url: string, the request url # response.method: string, the request method # response.statusCode: int, the response status code # response.body: string, the response body # response.headers: map[string][]string, the response headers successCondition : \"response.body contains \\\"google\\\"\" # available since v3.3 body : \"test body\" # Change request body","title":"HTTP Template"},{"location":"http-template/#argo-agent","text":"HTTP Templates use the Argo Agent, which executes the requests independently of the controller. The Agent and the Workflow Controller communicate through the WorkflowTaskSet CRD, which is created for each running Workflow that requires the use of the Agent . In order to use the Argo Agent, you will need to ensure that you have added the appropriate workflow RBAC to add an agent role with to Argo Workflows. An example agent role can be found in the quick-start manifests .","title":"Argo Agent"},{"location":"ide-setup/","text":"IDE Set-Up \u00b6 Validating Argo YAML against the JSON Schema \u00b6 Argo provides a JSON Schema that enables validation of YAML resources in your IDE. 
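For editors backed by the Red Hat YAML language server (see the VSCode section below), an alternative to a settings-level glob mapping is an inline schema modeline at the top of a file. This is a minimal sketch; the raw schema URL shown is an assumption and should be replaced with the schema location linked from this page.

```yaml
# Sketch: per-file schema reference for the Red Hat YAML language server.
# The URL below is an assumption; substitute the schema link referenced on this page.
# yaml-language-server: $schema=https://raw.githubusercontent.com/argoproj/argo-workflows/main/api/jsonschema/schema.json
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: main
  templates:
  - name: main
    container:
      image: argoproj/argosay:v2
      args: [echo, hello]
```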
JetBrains IDEs (Community & Ultimate Editions) \u00b6 YAML validation is supported natively in IDEA. Configure your IDE to reference the Argo schema and map it to your Argo YAML files: The schema is located here . Specify a file glob pattern that locates your Argo files. The example glob here is for the Argo Github project! Note that you may need to restart IDEA to pick up the changes. That's it. Open an Argo YAML file and you should see smarter behavior, including type errors and context-sensitive auto-complete. JetBrains IDEs (Community & Ultimate Editions) + Kubernetes Plugin \u00b6 If you have the JetBrains Kubernetes Plugin installed in your IDE, the validation can be configured in the Kubernetes plugin settings instead of using the internal JSON schema file validator. Unlike the previous JSON schema validation method, the plugin detects the necessary validation based on Kubernetes resource definition keys and does not require a file glob pattern. Like the previously described method: The schema is located here . Note that you may need to restart IDEA to pick up the changes. VSCode \u00b6 The Red Hat YAML plugin will provide error highlighting and auto-completion for Argo resources. Install the Red Hat YAML plugin in VSCode and open extension settings: Open the YAML schema settings: Add the Argo schema setting yaml.schemas : The schema is located here . Specify a file glob pattern that locates your Argo files. The example glob here is for the Argo Github project! Note that other defined schema with overlapping glob patterns may cause errors. That's it. Open an Argo YAML file and you should see smarter behavior, including type errors and context-sensitive auto-complete.","title":"IDE Set-Up"},{"location":"ide-setup/#ide-set-up","text":"","title":"IDE Set-Up"},{"location":"ide-setup/#validating-argo-yaml-against-the-json-schema","text":"Argo provides a JSON Schema that enables validation of YAML resources in your IDE.","title":"Validating Argo YAML against the JSON Schema"},{"location":"ide-setup/#jetbrains-ides-community-ultimate-editions","text":"YAML validation is supported natively in IDEA. Configure your IDE to reference the Argo schema and map it to your Argo YAML files: The schema is located here . Specify a file glob pattern that locates your Argo files. The example glob here is for the Argo Github project! Note that you may need to restart IDEA to pick up the changes. That's it. Open an Argo YAML file and you should see smarter behavior, including type errors and context-sensitive auto-complete.","title":"JetBrains IDEs (Community & Ultimate Editions)"},{"location":"ide-setup/#jetbrains-ides-community-ultimate-editions-kubernetes-plugin","text":"If you have the JetBrains Kubernetes Plugin installed in your IDE, the validation can be configured in the Kubernetes plugin settings instead of using the internal JSON schema file validator. Unlike the previous JSON schema validation method, the plugin detects the necessary validation based on Kubernetes resource definition keys and does not require a file glob pattern. Like the previously described method: The schema is located here . Note that you may need to restart IDEA to pick up the changes.","title":"JetBrains IDEs (Community & Ultimate Editions) + Kubernetes Plugin"},{"location":"ide-setup/#vscode","text":"The Red Hat YAML plugin will provide error highlighting and auto-completion for Argo resources. 
Install the Red Hat YAML plugin in VSCode and open extension settings: Open the YAML schema settings: Add the Argo schema setting yaml.schemas : The schema is located here . Specify a file glob pattern that locates your Argo files. The example glob here is for the Argo Github project! Note that other defined schema with overlapping glob patterns may cause errors. That's it. Open an Argo YAML file and you should see smarter behavior, including type errors and context-sensitive auto-complete.","title":"VSCode"},{"location":"inline-templates/","text":"Inline Templates \u00b6 v3.2 and after You can inline other templates within DAG and steps. Examples: DAG Steps Warning You can only inline once. Inlining a DAG within a DAG will not work.","title":"Inline Templates"},{"location":"inline-templates/#inline-templates","text":"v3.2 and after You can inline other templates within DAG and steps. Examples: DAG Steps Warning You can only inline once. Inlining a DAG within a DAG will not work.","title":"Inline Templates"},{"location":"installation/","text":"Installation \u00b6 Non-production installation \u00b6 If you just want to try out Argo Workflows in a non-production environment (including on desktop via minikube/kind/k3d etc) follow the quick-start guide . Production installation \u00b6 Installation Methods \u00b6 Official release manifests \u00b6 To install Argo Workflows, navigate to the releases page and find the release you wish to use (the latest full release is preferred). Scroll down to the Controller and Server section and execute the kubectl commands. 
You can use Kustomize to patch your preferred configurations on top of the base manifest. \u26a0\ufe0f If you are using GitOps, never use Kustomize remote base: this is dangerous. Instead, copy the manifests into your Git repo. \u26a0\ufe0f latest is tip, not stable. Never run it in production.","title":"Official release manifests"},{"location":"installation/#argo-workflows-helm-chart","text":"You can install Argo Workflows using the community maintained Helm charts .","title":"Argo Workflows Helm Chart"},{"location":"installation/#installation-options","text":"Determine your base installation option. A cluster install will watch and execute workflows in all namespaces. This is the default installation option when installing using the official release manifests. A namespace install only executes workflows in the namespace it is installed in (typically argo ). Look for namespace-install.yaml in the release assets . A managed namespace install : only executes workflows in a separate namespace from the one it is installed in. See Managed Namespace for more details.","title":"Installation options"},{"location":"installation/#additional-installation-considerations","text":"Review the following: Security . Scaling and running at massive scale . High-availability Disaster recovery","title":"Additional installation considerations"},{"location":"intermediate-inputs/","text":"Intermediate Parameters \u00b6 v3.4 and after Traditionally, Argo workflows has supported input parameters from UI only when the workflow starts, and after that, it's pretty much on autopilot. But, there are a lot of use cases where human interaction is required. This interaction is in the form of providing input text in the middle of the workflow, choosing from a dropdown of the options which a workflow step itself is intelligently generating. A similar feature which you can see in jenkins is pipeline-input-step Example use cases include: A human approval before doing something in production environment. Programmatic generation of a list of inputs from which the user chooses. Choosing from a list of available databases which the workflow itself is generating. This feature is achieved via suspend template . The workflow will pause at a Suspend node, and user will be able to update parameters using fields type text or dropdown. Intermediate Parameters Approval Example \u00b6 The below example shows static enum values approval step. The user will be able to choose between [YES, NO] which will be used in subsequent steps. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : intermediate-parameters-cicd- spec : entrypoint : cicd-pipeline templates : - name : cicd-pipeline steps : - - name : deploy-pre-prod template : deploy - - name : approval template : approval - - name : deploy-prod template : deploy when : '{{steps.approval.outputs.parameters.approve}} == YES' - name : approval suspend : {} inputs : parameters : - name : approve default : 'NO' enum : - 'YES' - 'NO' description : >- Choose YES to continue workflow and deploy to production outputs : parameters : - name : approve valueFrom : supplied : {} - name : deploy container : image : 'argoproj/argosay:v2' command : - /argosay args : - echo - deploying Intermediate Parameters DB Schema Update Example \u00b6 The below example shows programmatic generation of enum values. The generate-db-list template generates an output called db_list . This output is of type json . 
Since this json has a key called enum , with an array of options, the UI will parse this and display it as a dropdown. The output can be any string also, in which case the UI will display it as a text field. Which the user can later edit. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : intermediate-parameters-db- spec : entrypoint : db-schema-update templates : - name : db-schema-update steps : - - name : generate-db-list template : generate-db-list - - name : choose-db template : choose-db arguments : parameters : - name : db_name value : '{{steps.generate-db-list.outputs.parameters.db_list}}' - - name : update-schema template : update-schema arguments : parameters : - name : db_name value : '{{steps.choose-db.outputs.parameters.db_name}}' - name : generate-db-list outputs : parameters : - name : db_list valueFrom : path : /tmp/db_list.txt container : name : main image : 'argoproj/argosay:v2' command : - sh - '-c' args : - >- echo \"{\\\"enum\\\": [\\\"db1\\\", \\\"db2\\\", \\\"db3\\\"]}\" | tee /tmp/db_list.txt - name : choose-db inputs : parameters : - name : db_name description : >- Choose DB to update a schema outputs : parameters : - name : db_name valueFrom : supplied : {} suspend : {} - name : update-schema inputs : parameters : - name : db_name container : name : main image : 'argoproj/argosay:v2' command : - sh - '-c' args : - echo Updating DB {{inputs.parameters.db_name}} Some Important Details \u00b6 The suspended node should have the SAME parameters defined in inputs.parameters and outputs.parameters . All the output parameters in the suspended node should have valueFrom.supplied: {} The selected values will be available at .outputs.parameters.","title":"Intermediate Parameters"},{"location":"intermediate-inputs/#intermediate-parameters","text":"v3.4 and after Traditionally, Argo workflows has supported input parameters from UI only when the workflow starts, and after that, it's pretty much on autopilot. But, there are a lot of use cases where human interaction is required. This interaction is in the form of providing input text in the middle of the workflow, choosing from a dropdown of the options which a workflow step itself is intelligently generating. A similar feature which you can see in jenkins is pipeline-input-step Example use cases include: A human approval before doing something in production environment. Programmatic generation of a list of inputs from which the user chooses. Choosing from a list of available databases which the workflow itself is generating. This feature is achieved via suspend template . The workflow will pause at a Suspend node, and user will be able to update parameters using fields type text or dropdown.","title":"Intermediate Parameters"},{"location":"intermediate-inputs/#intermediate-parameters-approval-example","text":"The below example shows static enum values approval step. The user will be able to choose between [YES, NO] which will be used in subsequent steps. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : intermediate-parameters-cicd- spec : entrypoint : cicd-pipeline templates : - name : cicd-pipeline steps : - - name : deploy-pre-prod template : deploy - - name : approval template : approval - - name : deploy-prod template : deploy when : '{{steps.approval.outputs.parameters.approve}} == YES' - name : approval suspend : {} inputs : parameters : - name : approve default : 'NO' enum : - 'YES' - 'NO' description : >- Choose YES to continue workflow and deploy to production outputs : parameters : - name : approve valueFrom : supplied : {} - name : deploy container : image : 'argoproj/argosay:v2' command : - /argosay args : - echo - deploying","title":"Intermediate Parameters Approval Example"},{"location":"intermediate-inputs/#intermediate-parameters-db-schema-update-example","text":"The below example shows programmatic generation of enum values. The generate-db-list template generates an output called db_list . This output is of type json . Since this json has a key called enum , with an array of options, the UI will parse this and display it as a dropdown. The output can be any string also, in which case the UI will display it as a text field. Which the user can later edit. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : intermediate-parameters-db- spec : entrypoint : db-schema-update templates : - name : db-schema-update steps : - - name : generate-db-list template : generate-db-list - - name : choose-db template : choose-db arguments : parameters : - name : db_name value : '{{steps.generate-db-list.outputs.parameters.db_list}}' - - name : update-schema template : update-schema arguments : parameters : - name : db_name value : '{{steps.choose-db.outputs.parameters.db_name}}' - name : generate-db-list outputs : parameters : - name : db_list valueFrom : path : /tmp/db_list.txt container : name : main image : 'argoproj/argosay:v2' command : - sh - '-c' args : - >- echo \"{\\\"enum\\\": [\\\"db1\\\", \\\"db2\\\", \\\"db3\\\"]}\" | tee /tmp/db_list.txt - name : choose-db inputs : parameters : - name : db_name description : >- Choose DB to update a schema outputs : parameters : - name : db_name valueFrom : supplied : {} suspend : {} - name : update-schema inputs : parameters : - name : db_name container : name : main image : 'argoproj/argosay:v2' command : - sh - '-c' args : - echo Updating DB {{inputs.parameters.db_name}}","title":"Intermediate Parameters DB Schema Update Example"},{"location":"intermediate-inputs/#some-important-details","text":"The suspended node should have the SAME parameters defined in inputs.parameters and outputs.parameters . All the output parameters in the suspended node should have valueFrom.supplied: {} The selected values will be available at .outputs.parameters.","title":"Some Important Details"},{"location":"key-only-artifacts/","text":"Key-Only Artifacts \u00b6 v3.0 and after A key-only artifact is an input or output artifact where you only specify the key, omitting the bucket, secrets etc. When these are omitted, the bucket/secrets from the configured artifact repository is used. This allows you to move the configuration of the artifact repository out of the workflow specification. This is closely related to artifact repository ref . You'll want to use them together for maximum benefit. This should probably be your default if you're using v3.0: Reduces the size of workflows (improved performance). 
User owned artifact repository set-up configuration (simplified management). Decouples the artifact location configuration from the workflow. Allowing you to re-configure the artifact repository without changing your workflows or templates. Example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : key-only-artifacts- spec : entrypoint : main templates : - name : main dag : tasks : - name : generate template : generate - name : consume template : consume dependencies : - generate - name : generate container : image : argoproj/argosay:v2 args : [ echo , hello , /mnt/file ] outputs : artifacts : - name : file path : /mnt/file s3 : key : my-file - name : consume container : image : argoproj/argosay:v2 args : [ cat , /tmp/file ] inputs : artifacts : - name : file path : /tmp/file s3 : key : my-file Warning The location data is not longer stored in /status/nodes . Any tooling that relies on this will need to be updated.","title":"Key-Only Artifacts"},{"location":"key-only-artifacts/#key-only-artifacts","text":"v3.0 and after A key-only artifact is an input or output artifact where you only specify the key, omitting the bucket, secrets etc. When these are omitted, the bucket/secrets from the configured artifact repository is used. This allows you to move the configuration of the artifact repository out of the workflow specification. This is closely related to artifact repository ref . You'll want to use them together for maximum benefit. This should probably be your default if you're using v3.0: Reduces the size of workflows (improved performance). User owned artifact repository set-up configuration (simplified management). Decouples the artifact location configuration from the workflow. Allowing you to re-configure the artifact repository without changing your workflows or templates. Example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : key-only-artifacts- spec : entrypoint : main templates : - name : main dag : tasks : - name : generate template : generate - name : consume template : consume dependencies : - generate - name : generate container : image : argoproj/argosay:v2 args : [ echo , hello , /mnt/file ] outputs : artifacts : - name : file path : /mnt/file s3 : key : my-file - name : consume container : image : argoproj/argosay:v2 args : [ cat , /tmp/file ] inputs : artifacts : - name : file path : /tmp/file s3 : key : my-file Warning The location data is not longer stored in /status/nodes . Any tooling that relies on this will need to be updated.","title":"Key-Only Artifacts"},{"location":"kubectl/","text":"kubectl \u00b6 You can also create Workflows directly with kubectl . However, the Argo CLI offers extra features that kubectl does not, such as YAML validation, workflow visualization, parameter passing, retries and resubmits, suspend and resume, and more. kubectl create -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-world.yaml kubectl get wf -n argo kubectl get wf hello-world-xxx -n argo kubectl get po -n argo --selector = workflows.argoproj.io/workflow = hello-world-xxx kubectl logs hello-world-yyy -c main -n argo","title":"kubectl"},{"location":"kubectl/#kubectl","text":"You can also create Workflows directly with kubectl . However, the Argo CLI offers extra features that kubectl does not, such as YAML validation, workflow visualization, parameter passing, retries and resubmits, suspend and resume, and more. 
kubectl create -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-world.yaml kubectl get wf -n argo kubectl get wf hello-world-xxx -n argo kubectl get po -n argo --selector = workflows.argoproj.io/workflow = hello-world-xxx kubectl logs hello-world-yyy -c main -n argo","title":"kubectl"},{"location":"lifecyclehook/","text":"Lifecycle-Hook \u00b6 v3.3 and after Introduction \u00b6 A LifecycleHook triggers an action based on a conditional expression or on completion of a step or template. It is configured either at the workflow-level or template-level, for instance as a function of the workflow.status or steps.status , respectively. A LifecycleHook executes during execution time and executes once. It will execute in parallel to its step or template once the expression is satisfied. In other words, a LifecycleHook functions like an exit handler with a conditional expression. You must not name a LifecycleHook exit or it becomes an exit handler; otherwise the hook name has no relevance. Workflow-level LifecycleHook : Executes the template when a configured expression is met during the workflow. Workflow-level Lifecycle-Hook example Template-level Lifecycle-Hook : Executes the template when a configured expression is met during the step in which it is defined. Template-level Lifecycle-Hook example Supported conditions \u00b6 Exit handler variables : workflow.status and workflow.failures template templateRef arguments Unsupported conditions \u00b6 outputs are not usable since LifecycleHook executes during execution time and outputs are not produced until the step is completed. You can use outputs from previous steps, just not the one you're hooking into. If you'd like to use outputs create an exit handler instead - all the status variable are available there so you can still conditionally decide what to do. Notification use case \u00b6 A LifecycleHook can be used to configure a notification depending on a workflow status change or template status change, like the example below: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : lifecycle-hook- spec : entrypoint : main hooks : exit : template : http running : expression : workflow.status == \"Running\" template : http templates : - name : main steps : - - name : step1 template : heads - name : heads container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads\\\"\" ] - name : http http : url : http://dummy.restapiexample.com/api/v1/employees Put differently, an exit handler is like a workflow-level LifecycleHook with an expression of workflow.status == \"Succeeded\" or workflow.status == \"Failed\" or workflow.status == \"Error\" .","title":"Lifecycle-Hook"},{"location":"lifecyclehook/#lifecycle-hook","text":"v3.3 and after","title":"Lifecycle-Hook"},{"location":"lifecyclehook/#introduction","text":"A LifecycleHook triggers an action based on a conditional expression or on completion of a step or template. It is configured either at the workflow-level or template-level, for instance as a function of the workflow.status or steps.status , respectively. A LifecycleHook executes during execution time and executes once. It will execute in parallel to its step or template once the expression is satisfied. In other words, a LifecycleHook functions like an exit handler with a conditional expression. You must not name a LifecycleHook exit or it becomes an exit handler; otherwise the hook name has no relevance. 
Workflow-level LifecycleHook : Executes the template when a configured expression is met during the workflow. Workflow-level Lifecycle-Hook example Template-level Lifecycle-Hook : Executes the template when a configured expression is met during the step in which it is defined. Template-level Lifecycle-Hook example","title":"Introduction"},{"location":"lifecyclehook/#supported-conditions","text":"Exit handler variables : workflow.status and workflow.failures template templateRef arguments","title":"Supported conditions"},{"location":"lifecyclehook/#unsupported-conditions","text":"outputs are not usable since LifecycleHook executes during execution time and outputs are not produced until the step is completed. You can use outputs from previous steps, just not the one you're hooking into. If you'd like to use outputs create an exit handler instead - all the status variable are available there so you can still conditionally decide what to do.","title":"Unsupported conditions"},{"location":"lifecyclehook/#notification-use-case","text":"A LifecycleHook can be used to configure a notification depending on a workflow status change or template status change, like the example below: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : lifecycle-hook- spec : entrypoint : main hooks : exit : template : http running : expression : workflow.status == \"Running\" template : http templates : - name : main steps : - - name : step1 template : heads - name : heads container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads\\\"\" ] - name : http http : url : http://dummy.restapiexample.com/api/v1/employees Put differently, an exit handler is like a workflow-level LifecycleHook with an expression of workflow.status == \"Succeeded\" or workflow.status == \"Failed\" or workflow.status == \"Error\" .","title":"Notification use case"},{"location":"links/","text":"Links \u00b6 v2.7 and after You can configure Argo Server to show custom links: A \"Get Help\" button in the bottom right of the window linking to you organization help pages or chat room. Deep-links to your facilities (e.g. logging facility) in the UI for both the workflow and each workflow pod. Adds a button to the top of workflow view to navigate to customized views. Links can contain placeholder variables. Placeholder variables are indicated by the dollar sign and curly braces: ${variable} . These are the commonly used variables: ${metadata.namespace} : Kubernetes namespace of the current workflow / pod / event source / sensor ${metadata.name} : Name of the current workflow / pod / event source / sensor ${status.startedAt} : Start time-stamp of the workflow / pod, in the format of 2021-01-01T10:35:56Z ${status.finishedAt} : End time-stamp of the workflow / pod, in the format of 2021-01-01T10:35:56Z . If the workflow/pod is still running, this variable will be null See workflow-controller-configmap.yaml for a complete example v3.1 and after Epoch time-stamps are available now. These are useful if we want to add links to logging facilities like Grafana or DataDog , as they support Unix epoch time-stamp formats as URL parameters: ${status.startedAtEpoch} : Start time-stamp of the workflow/pod, in the Unix epoch time format in milliseconds , e.g. 1609497000000 . ${status.finishedAtEpoch} : End time-stamp of the workflow/pod, in the Unix epoch time format in milliseconds , e.g. 1609497000000 . If the workflow/pod is still running, this variable will represent the current time. 
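To illustrate how these placeholder variables are used, a links entry in the workflow-controller-configmap might look like the sketch below. Only the ${...} placeholders come from this page; the name / scope / url fields follow the linked workflow-controller-configmap.yaml example, and the target URLs and link names are illustrative assumptions.

```yaml
# Illustrative sketch of custom links; see workflow-controller-configmap.yaml for the authoritative example.
# The grafana.example.com URLs are placeholders, not real endpoints.
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data:
  links: |
    - name: Pod Logs
      scope: pod
      url: https://grafana.example.com/explore?from=${status.startedAtEpoch}&to=${status.finishedAtEpoch}&var-pod=${metadata.name}
    - name: Workflow Logs
      scope: workflow
      url: https://grafana.example.com/explore?namespace=${metadata.namespace}&workflow=${metadata.name}
    - name: Get Help
      scope: chat
      url: https://example.com/argo-help
```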
v3.1 and after In addition to the above variables, we can now access all workflow fields under ${workflow} . For example, one may find it useful to define a custom label in the workflow and access it by ${workflow.metadata.labels.custom_label_name} We can also access workflow fields in a pod link. For example, ${workflow.metadata.name} returns the name of the workflow instead of the name of the pod. If the field doesn't exist on the workflow then the value will be an empty string.","title":"Links"},{"location":"links/#links","text":"v2.7 and after You can configure Argo Server to show custom links: A \"Get Help\" button in the bottom right of the window linking to you organization help pages or chat room. Deep-links to your facilities (e.g. logging facility) in the UI for both the workflow and each workflow pod. Adds a button to the top of workflow view to navigate to customized views. Links can contain placeholder variables. Placeholder variables are indicated by the dollar sign and curly braces: ${variable} . These are the commonly used variables: ${metadata.namespace} : Kubernetes namespace of the current workflow / pod / event source / sensor ${metadata.name} : Name of the current workflow / pod / event source / sensor ${status.startedAt} : Start time-stamp of the workflow / pod, in the format of 2021-01-01T10:35:56Z ${status.finishedAt} : End time-stamp of the workflow / pod, in the format of 2021-01-01T10:35:56Z . If the workflow/pod is still running, this variable will be null See workflow-controller-configmap.yaml for a complete example v3.1 and after Epoch time-stamps are available now. These are useful if we want to add links to logging facilities like Grafana or DataDog , as they support Unix epoch time-stamp formats as URL parameters: ${status.startedAtEpoch} : Start time-stamp of the workflow/pod, in the Unix epoch time format in milliseconds , e.g. 1609497000000 . ${status.finishedAtEpoch} : End time-stamp of the workflow/pod, in the Unix epoch time format in milliseconds , e.g. 1609497000000 . If the workflow/pod is still running, this variable will represent the current time. v3.1 and after In addition to the above variables, we can now access all workflow fields under ${workflow} . For example, one may find it useful to define a custom label in the workflow and access it by ${workflow.metadata.labels.custom_label_name} We can also access workflow fields in a pod link. For example, ${workflow.metadata.name} returns the name of the workflow instead of the name of the pod. If the field doesn't exist on the workflow then the value will be an empty string.","title":"Links"},{"location":"managed-namespace/","text":"Managed Namespace \u00b6 v2.5 and after You can install Argo in either namespace scoped or cluster scoped configurations. The main difference is whether you install Roles or ClusterRoles, respectively. In namespace scoped configuration, you must run both the Workflow Controller and Argo Server using --namespaced . If you want to run workflows in a separate namespace, add --managed-namespace as well. (In cluster scoped configuration, don't include --namespaced or --managed-namespace .) For example: - args : - --configmap - workflow-controller-configmap - --executor-image - argoproj/workflow-controller:v2.5.1 - --namespaced - --managed-namespace - default Please note that both cluster scoped and namespace scoped configurations require \"admin\" roles to install because Argo's Custom Resource Definitions (CRDs) must be created (CRDs are cluster scoped objects). 
Example Use Case You can use a managed namespace install if you want some users or services to run Workflows without granting them privileges in the namespace where Argo Workflows is installed. For example, if you only run CI/CD Workflows that are maintained by the same team that manages the Argo Workflows installation, you may want a namespace install. But if all the Workflows are run by a separate data science team, you may want to give them a \"data-science-workflows\" namespace and use a managed namespace install of Argo Workflows in another namespace.","title":"Managed Namespace"},{"location":"managed-namespace/#managed-namespace","text":"v2.5 and after You can install Argo in either namespace scoped or cluster scoped configurations. The main difference is whether you install Roles or ClusterRoles, respectively. In namespace scoped configuration, you must run both the Workflow Controller and Argo Server using --namespaced . If you want to run workflows in a separate namespace, add --managed-namespace as well. (In cluster scoped configuration, don't include --namespaced or --managed-namespace .) For example: - args : - --configmap - workflow-controller-configmap - --executor-image - argoproj/workflow-controller:v2.5.1 - --namespaced - --managed-namespace - default Please note that both cluster scoped and namespace scoped configurations require \"admin\" roles to install because Argo's Custom Resource Definitions (CRDs) must be created (CRDs are cluster scoped objects). Example Use Case You can use a managed namespace install if you want some users or services to run Workflows without granting them privileges in the namespace where Argo Workflows is installed. For example, if you only run CI/CD Workflows that are maintained by the same team that manages the Argo Workflows installation, you may want a namespace install. But if all the Workflows are run by a separate data science team, you may want to give them a \"data-science-workflows\" namespace and use a managed namespace install of Argo Workflows in another namespace.","title":"Managed Namespace"},{"location":"manually-create-secrets/","text":"Service Account Secrets \u00b6 As of Kubernetes v1.24, secrets are no longer automatically created for service accounts. You must create a secret manually . You must also make the secret discoverable. You have two options: Option 1 - Discovery By Name \u00b6 Name your secret ${serviceAccountName}.service-account-token : apiVersion : v1 kind : Secret metadata : name : default.service-account-token annotations : kubernetes.io/service-account.name : default type : kubernetes.io/service-account-token This option is simpler than option 2, as you can create the secret and make it discoverable by name at the same time. Option 2 - Discovery By Annotation \u00b6 Annotate the service account with the secret name: apiVersion : v1 kind : ServiceAccount metadata : name : default annotations : workflows.argoproj.io/service-account-token.name : my-token This option is useful when the secret already exists, or the service account has a very long name.","title":"Service Account Secrets"},{"location":"manually-create-secrets/#service-account-secrets","text":"As of Kubernetes v1.24, secrets are no longer automatically created for service accounts. You must create a secret manually . You must also make the secret discoverable. 
You have two options:","title":"Service Account Secrets"},{"location":"manually-create-secrets/#option-1-discovery-by-name","text":"Name your secret ${serviceAccountName}.service-account-token : apiVersion : v1 kind : Secret metadata : name : default.service-account-token annotations : kubernetes.io/service-account.name : default type : kubernetes.io/service-account-token This option is simpler than option 2, as you can create the secret and make it discoverable by name at the same time.","title":"Option 1 - Discovery By Name"},{"location":"manually-create-secrets/#option-2-discovery-by-annotation","text":"Annotate the service account with the secret name: apiVersion : v1 kind : ServiceAccount metadata : name : default annotations : workflows.argoproj.io/service-account-token.name : my-token This option is useful when the secret already exists, or the service account has a very long name.","title":"Option 2 - Discovery By Annotation"},{"location":"memoization/","text":"Step Level Memoization \u00b6 v2.10 and after Introduction \u00b6 Workflows often have outputs that are expensive to compute. Memoization reduces cost and workflow execution time by recording the result of previously run steps: it stores the outputs of a template into a specified cache with a variable key. Prior to version 3.5 memoization only works for steps which have outputs, if you attempt to use it on steps which do not it should not work (there are some cases where it does, but they shouldn't). It was designed for 'pure' steps, where the purpose of running the step is to calculate some outputs based upon the step's inputs, and only the inputs. Pure steps should not interact with the outside world, but workflows won't enforce this on you. If you are using workflows prior to version 3.5 you should look at the work avoidance technique instead of memoization if your steps don't have outputs. In version 3.5 or later all steps can be memoized, whether or not they have outputs. Cache Method \u00b6 Currently, the cached data is stored in config-maps. This allows you to easily manipulate cache entries manually through kubectl and the Kubernetes API without having to go through Argo. All cache config-maps must have the label workflows.argoproj.io/configmap-type: Cache to be used as a cache. This prevents accidental access to other important config-maps in the system Using Memoization \u00b6 Memoization is set at the template level. You must specify a key , which can be static strings but more often depend on inputs. You must also specify a name for the config-map cache. Optionally you can set a maxAge in seconds or hours (e.g. 180s , 24h ) to define how long should it be considered valid. If an entry is older than the maxAge , it will be ignored. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : memoized-workflow- spec : entrypoint : whalesay templates : - name : whalesay memoize : key : \"{{inputs.parameters.message}}\" maxAge : \"10s\" cache : configMap : name : whalesay-cache Find a simple example for memoization here . Note In order to use memoization it is necessary to add the verbs create and update to the configmaps resource for the appropriate (cluster) roles. In the case of a cluster install the argo-cluster-role cluster role should be updated, whilst for a namespace install the argo-role role should be updated. 
FAQ \u00b6 If you see errors like error creating cache entry: ConfigMap \\\"reuse-task\\\" is invalid: []: Too long: must have at most 1048576 characters , this is due to the 1MB limit placed on the size of ConfigMap . Here are a couple of ways that might help resolve this: Delete the existing ConfigMap cache or switch to use a different cache. Reduce the size of the output parameters for the nodes that are being memoized. Split your cache into different memoization keys and cache names so that each cache entry is small. My step isn't getting memoized, why not? If you are running workflows <3.5 ensure that you have specified at least one output on the step.","title":"Step Level Memoization"},{"location":"memoization/#step-level-memoization","text":"v2.10 and after","title":"Step Level Memoization"},{"location":"memoization/#introduction","text":"Workflows often have outputs that are expensive to compute. Memoization reduces cost and workflow execution time by recording the result of previously run steps: it stores the outputs of a template into a specified cache with a variable key. Prior to version 3.5 memoization only works for steps which have outputs, if you attempt to use it on steps which do not it should not work (there are some cases where it does, but they shouldn't). It was designed for 'pure' steps, where the purpose of running the step is to calculate some outputs based upon the step's inputs, and only the inputs. Pure steps should not interact with the outside world, but workflows won't enforce this on you. If you are using workflows prior to version 3.5 you should look at the work avoidance technique instead of memoization if your steps don't have outputs. In version 3.5 or later all steps can be memoized, whether or not they have outputs.","title":"Introduction"},{"location":"memoization/#cache-method","text":"Currently, the cached data is stored in config-maps. This allows you to easily manipulate cache entries manually through kubectl and the Kubernetes API without having to go through Argo. All cache config-maps must have the label workflows.argoproj.io/configmap-type: Cache to be used as a cache. This prevents accidental access to other important config-maps in the system","title":"Cache Method"},{"location":"memoization/#using-memoization","text":"Memoization is set at the template level. You must specify a key , which can be static strings but more often depend on inputs. You must also specify a name for the config-map cache. Optionally you can set a maxAge in seconds or hours (e.g. 180s , 24h ) to define how long should it be considered valid. If an entry is older than the maxAge , it will be ignored. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : memoized-workflow- spec : entrypoint : whalesay templates : - name : whalesay memoize : key : \"{{inputs.parameters.message}}\" maxAge : \"10s\" cache : configMap : name : whalesay-cache Find a simple example for memoization here . Note In order to use memoization it is necessary to add the verbs create and update to the configmaps resource for the appropriate (cluster) roles. 
In the case of a cluster install the argo-cluster-role cluster role should be updated, whilst for a namespace install the argo-role role should be updated.","title":"Using Memoization"},{"location":"memoization/#faq","text":"If you see errors like error creating cache entry: ConfigMap \\\"reuse-task\\\" is invalid: []: Too long: must have at most 1048576 characters , this is due to the 1MB limit placed on the size of ConfigMap . Here are a couple of ways that might help resolve this: Delete the existing ConfigMap cache or switch to use a different cache. Reduce the size of the output parameters for the nodes that are being memoized. Split your cache into different memoization keys and cache names so that each cache entry is small. My step isn't getting memoized, why not? If you are running workflows <3.5 ensure that you have specified at least one output on the step.","title":"FAQ"},{"location":"metrics/","text":"Prometheus Metrics \u00b6 v2.7 and after Introduction \u00b6 Argo emits a certain number of controller metrics that inform on the state of the controller at any given time. Furthermore, users can also define their own custom metrics to inform on the state of their Workflows. Custom Prometheus metrics can be defined to be emitted on a Workflow - and Template -level basis. These can be useful for many cases; some examples: Keeping track of the duration of a Workflow or Template over time, and setting an alert if it goes beyond a threshold Keeping track of the number of times a Workflow or Template fails over time Reporting an important internal metric, such as a model training score or an internal error rate Emitting custom metrics with Argo is easy, but it's important to understand what makes a good Prometheus metric and the best way to define metrics in Argo to avoid problems such as cardinality explosion . Metrics and metrics in Argo \u00b6 There are two kinds of metrics emitted by Argo: controller metrics and custom metrics . Controller metrics \u00b6 Metrics that inform on the state of the controller; i.e., they answer the question \"What is the state of the controller right now?\" Default controller metrics can be scraped from service workflow-controller-metrics at the endpoint :9090/metrics Custom metrics \u00b6 Metrics that inform on the state of a Workflow, or a series of Workflows. These custom metrics are defined by the user in the Workflow spec. Emitting custom metrics is the responsibility of the emitter owner. Since the user defines Workflows in Argo, the user is responsible for emitting metrics correctly. What is and isn't a Prometheus metric \u00b6 Prometheus metrics should be thought of as ephemeral data points of running processes; i.e., they are the answer to the question \"What is the state of my system right now ?\". Metrics should report things such as: a counter of the number of times a workflow or steps has failed, or a gauge of workflow duration, or an average of an internal metric such as a model training score or error rate. Metrics are then routinely scraped and stored and -- when they are correctly designed -- they can represent time series. Aggregating the examples above over time could answer useful questions such as: How has the error rate of this workflow or step changed over time? How has the duration of this workflow changed over time? Is the current workflow running for too long? Is our model improving over time? Prometheus metrics should not be thought of as a store of data. 
Since metrics should only report the state of the system at the current time, they should not be used to report historical data such as: the status of an individual instance of a workflow, or how long a particular instance of a step took to run. Metrics are also ephemeral, meaning there is no guarantee that they will be persisted for any amount of time. If you need a way to view and analyze historical data, consider the workflow archive or reporting to logs. Default Controller Metrics \u00b6 Metrics for the Four Golden Signals are: Latency: argo_workflows_queue_latency Traffic: argo_workflows_count and argo_workflows_queue_depth_count Errors: argo_workflows_count and argo_workflows_error_count Saturation: argo_workflows_workers_busy and argo_workflows_workflow_condition argo_pod_missing \u00b6 Pods were not seen. E.g. by being deleted by Kubernetes. You should only see this under high load. Note This metric's name starts with argo_ not argo_workflows_ . argo_workflows_count \u00b6 Number of workflow in each phase. The Running count does not mean that a workflows pods are running, just that the controller has scheduled them. A workflow can be stuck in Running with pending pods for a long time. argo_workflows_error_count \u00b6 A count of certain errors incurred by the controller. argo_workflows_k8s_request_total \u00b6 Number of API requests sent to the Kubernetes API. argo_workflows_operation_duration_seconds \u00b6 A histogram of durations of operations. An operation is a single workflow reconciliation loop within the workflow-controller. It's the time for the controller to process a single workflow after it has been read from the cluster and is a measure of the performance of the controller affected by the complexity of the workflow. argo_workflows_pods_count \u00b6 It is possible for a workflow to start, but no pods be running (e.g. cluster is too busy to run them). This metric sheds light on actual work being done. argo_workflows_queue_adds_count \u00b6 The number of additions to the queue of workflows or cron workflows. argo_workflows_queue_depth_count \u00b6 The depth of the queue of workflows or cron workflows to be processed by the controller. argo_workflows_queue_latency \u00b6 The time workflows or cron workflows spend in the queue waiting to be processed. argo_workflows_workers_busy \u00b6 The number of workers that are busy. argo_workflows_workflow_condition \u00b6 The number of workflow with different conditions. This will tell you the number of workflows with running pods. argo_workflows_workflows_processed_count \u00b6 A count of all Workflow updates processed by the controller. Metric types \u00b6 Please see the Prometheus docs on metric types . How metrics work in Argo \u00b6 In order to analyze the behavior of a workflow over time, we need to be able to link different instances (i.e. individual executions) of a workflow together into a \"series\" for the purposes of emitting metrics. We do so by linking them together with the same metric descriptor. In Prometheus, a metric descriptor is defined as a metric's name and its key-value labels. For example, for a metric tracking the duration of model execution over time, a metric descriptor could be: argo_workflows_model_exec_time{model_name=\"model_a\",phase=\"validation\"} This metric then represents the amount of time that \"Model A\" took to train in the phase \"Validation\". 
It is important to understand that the metric name and its labels form the descriptor: argo_workflows_model_exec_time{model_name=\"model_b\",phase=\"validation\"} is a different metric (and will track a different \"series\" altogether). Now, whenever we run our first workflow that validates \"Model A\" a metric with the amount of time it took it to do so will be created and emitted. For each subsequent time that this happens, no new metrics will be emitted and the same metric will be updated with the new value. Since, in effect, we are interested on the execution time of \"validation\" of \"Model A\" over time, we are no longer interested in the previous metric and can assume it has already been scraped. In summary, whenever you want to track a particular metric over time, you should use the same metric name and metric labels wherever it is emitted. This is how these metrics are \"linked\" as belonging to the same series. Grafana Dashboard for Argo Controller Metrics \u00b6 Please see the Argo Workflows metrics Grafana dashboard. Defining metrics \u00b6 Metrics are defined in-place on the Workflow/Step/Task where they are emitted from. Metrics are always processed after the Workflow/Step/Task completes, with the exception of real-time metrics . Metric definitions must include a name and a help doc string. They can also include any number of labels (when defining labels avoid cardinality explosion). Metrics with the same name must always use the same exact help string, having different metrics with the same name, but with a different help string will cause an error (this is a Prometheus requirement). All metrics can also be conditionally emitted by defining a when clause. This when clause works the same as elsewhere in a workflow. A metric must also have a type, it can be one of gauge , histogram , and counter ( see below ). Within the metric type a value must be specified. This value can be either a literal value of be an Argo variable . When defining a histogram , buckets must also be provided (see below). Argo variables can be included anywhere in the metric spec, such as in labels , name , help , when , etc. Metric names can only contain alphanumeric characters, _ , and : . Metric Spec \u00b6 In Argo you can define a metric on the Workflow level or on the Template level. Here is an example of a Workflow level Gauge metric that will report the Workflow duration time: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : model-training- spec : entrypoint : steps metrics : prometheus : - name : exec_duration_gauge # Metric name (will be prepended with \"argo_workflows_\") labels : # Labels are optional. Avoid cardinality explosion. - key : name value : model_a help : \"Duration gauge by name\" # A help doc describing your metric. This is required. gauge : # The metric type. Available are \"gauge\", \"histogram\", and \"counter\". value : \"{{workflow.duration}}\" # The value of your metric. It could be an Argo variable (see variables doc) or a literal value ... An example of a Template -level Counter metric that will increase a counter every time the step fails: ... templates : - name : flakey metrics : prometheus : - name : result_counter help : \"Count of step execution by result status\" labels : - key : name value : flakey when : \"{{status}} == Failed\" # Emit the metric conditionally. 
Works the same as normal \"when\" counter : value : \"1\" # This increments the counter by 1 container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] ... A similar example of such a Counter metric that will increase for every step status ... templates : - name : flakey metrics : prometheus : - name : result_counter help : \"Count of step execution by result status\" labels : - key : name value : flakey - key : status value : \"{{status}}\" # Argo variable in `labels` counter : value : \"1\" container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] ... Finally, an example of a Template -level Histogram metric that tracks an internal value: ... templates : - name : random-int metrics : prometheus : - name : random_int_step_histogram help : \"Value of the int emitted by random-int at step level\" when : \"{{status}} == Succeeded\" # Only emit metric when step succeeds histogram : buckets : # Bins must be defined for histogram metrics - 2.01 # and are part of the metric descriptor. - 4.01 # All metrics in this series MUST have the - 6.01 # same buckets. - 8.01 - 10.01 value : \"{{outputs.parameters.rand-int-value}}\" # References itself for its output (see variables doc) outputs : parameters : - name : rand-int-value globalName : rand-int-value valueFrom : path : /tmp/rand_int.txt container : image : alpine:latest command : [ sh , -c ] args : [ \"RAND_INT=$((1 + RANDOM % 10)); echo $RAND_INT; echo $RAND_INT > /tmp/rand_int.txt\" ] ... Real-Time Metrics \u00b6 Argo supports a limited number of real-time metrics. These metrics are emitted in real-time, beginning when the step execution starts and ending when it completes. Real-time metrics are only available on Gauge type metrics and with a limited number of variables . To define a real-time metric simply add realtime: true to a gauge metric with a valid real-time variable. For example: gauge : realtime : true value : \"{{duration}}\" Metrics endpoint \u00b6 By default, metrics are emitted by the workflow-controller on port 9090 on the /metrics path. By port-forwarding to the pod you can view the metrics in your browser at http://localhost:9090/metrics : kubectl -n argo port-forward deploy/workflow-controller 9090:9090 A metrics service is not installed as part of the default installation so you will need to add one if you wish to use a Prometheus Service Monitor: cat <:9090/metrics","title":"Controller metrics"},{"location":"metrics/#custom-metrics","text":"Metrics that inform on the state of a Workflow, or a series of Workflows. These custom metrics are defined by the user in the Workflow spec. Emitting custom metrics is the responsibility of the emitter owner. Since the user defines Workflows in Argo, the user is responsible for emitting metrics correctly.","title":"Custom metrics"},{"location":"metrics/#what-is-and-isnt-a-prometheus-metric","text":"Prometheus metrics should be thought of as ephemeral data points of running processes; i.e., they are the answer to the question \"What is the state of my system right now ?\". Metrics should report things such as: a counter of the number of times a workflow or steps has failed, or a gauge of workflow duration, or an average of an internal metric such as a model training score or error rate. 
Metrics are then routinely scraped and stored and -- when they are correctly designed -- they can represent time series. Aggregating the examples above over time could answer useful questions such as: How has the error rate of this workflow or step changed over time? How has the duration of this workflow changed over time? Is the current workflow running for too long? Is our model improving over time? Prometheus metrics should not be thought of as a store of data. Since metrics should only report the state of the system at the current time, they should not be used to report historical data such as: the status of an individual instance of a workflow, or how long a particular instance of a step took to run. Metrics are also ephemeral, meaning there is no guarantee that they will be persisted for any amount of time. If you need a way to view and analyze historical data, consider the workflow archive or reporting to logs.","title":"What is and isn't a Prometheus metric"},{"location":"metrics/#default-controller-metrics","text":"Metrics for the Four Golden Signals are: Latency: argo_workflows_queue_latency Traffic: argo_workflows_count and argo_workflows_queue_depth_count Errors: argo_workflows_count and argo_workflows_error_count Saturation: argo_workflows_workers_busy and argo_workflows_workflow_condition","title":"Default Controller Metrics"},{"location":"metrics/#argo_pod_missing","text":"Pods were not seen. E.g. by being deleted by Kubernetes. You should only see this under high load. Note This metric's name starts with argo_ not argo_workflows_ .","title":"argo_pod_missing"},{"location":"metrics/#argo_workflows_count","text":"Number of workflow in each phase. The Running count does not mean that a workflows pods are running, just that the controller has scheduled them. A workflow can be stuck in Running with pending pods for a long time.","title":"argo_workflows_count"},{"location":"metrics/#argo_workflows_error_count","text":"A count of certain errors incurred by the controller.","title":"argo_workflows_error_count"},{"location":"metrics/#argo_workflows_k8s_request_total","text":"Number of API requests sent to the Kubernetes API.","title":"argo_workflows_k8s_request_total"},{"location":"metrics/#argo_workflows_operation_duration_seconds","text":"A histogram of durations of operations. An operation is a single workflow reconciliation loop within the workflow-controller. It's the time for the controller to process a single workflow after it has been read from the cluster and is a measure of the performance of the controller affected by the complexity of the workflow.","title":"argo_workflows_operation_duration_seconds"},{"location":"metrics/#argo_workflows_pods_count","text":"It is possible for a workflow to start, but no pods be running (e.g. cluster is too busy to run them). 
This metric sheds light on actual work being done.","title":"argo_workflows_pods_count"},{"location":"metrics/#argo_workflows_queue_adds_count","text":"The number of additions to the queue of workflows or cron workflows.","title":"argo_workflows_queue_adds_count"},{"location":"metrics/#argo_workflows_queue_depth_count","text":"The depth of the queue of workflows or cron workflows to be processed by the controller.","title":"argo_workflows_queue_depth_count"},{"location":"metrics/#argo_workflows_queue_latency","text":"The time workflows or cron workflows spend in the queue waiting to be processed.","title":"argo_workflows_queue_latency"},{"location":"metrics/#argo_workflows_workers_busy","text":"The number of workers that are busy.","title":"argo_workflows_workers_busy"},{"location":"metrics/#argo_workflows_workflow_condition","text":"The number of workflow with different conditions. This will tell you the number of workflows with running pods.","title":"argo_workflows_workflow_condition"},{"location":"metrics/#argo_workflows_workflows_processed_count","text":"A count of all Workflow updates processed by the controller.","title":"argo_workflows_workflows_processed_count"},{"location":"metrics/#metric-types","text":"Please see the Prometheus docs on metric types .","title":"Metric types"},{"location":"metrics/#how-metrics-work-in-argo","text":"In order to analyze the behavior of a workflow over time, we need to be able to link different instances (i.e. individual executions) of a workflow together into a \"series\" for the purposes of emitting metrics. We do so by linking them together with the same metric descriptor. In Prometheus, a metric descriptor is defined as a metric's name and its key-value labels. For example, for a metric tracking the duration of model execution over time, a metric descriptor could be: argo_workflows_model_exec_time{model_name=\"model_a\",phase=\"validation\"} This metric then represents the amount of time that \"Model A\" took to train in the phase \"Validation\". It is important to understand that the metric name and its labels form the descriptor: argo_workflows_model_exec_time{model_name=\"model_b\",phase=\"validation\"} is a different metric (and will track a different \"series\" altogether). Now, whenever we run our first workflow that validates \"Model A\" a metric with the amount of time it took it to do so will be created and emitted. For each subsequent time that this happens, no new metrics will be emitted and the same metric will be updated with the new value. Since, in effect, we are interested on the execution time of \"validation\" of \"Model A\" over time, we are no longer interested in the previous metric and can assume it has already been scraped. In summary, whenever you want to track a particular metric over time, you should use the same metric name and metric labels wherever it is emitted. This is how these metrics are \"linked\" as belonging to the same series.","title":"How metrics work in Argo"},{"location":"metrics/#grafana-dashboard-for-argo-controller-metrics","text":"Please see the Argo Workflows metrics Grafana dashboard.","title":"Grafana Dashboard for Argo Controller Metrics"},{"location":"metrics/#defining-metrics","text":"Metrics are defined in-place on the Workflow/Step/Task where they are emitted from. Metrics are always processed after the Workflow/Step/Task completes, with the exception of real-time metrics . Metric definitions must include a name and a help doc string. 
They can also include any number of labels (when defining labels avoid cardinality explosion). Metrics with the same name must always use the same exact help string, having different metrics with the same name, but with a different help string will cause an error (this is a Prometheus requirement). All metrics can also be conditionally emitted by defining a when clause. This when clause works the same as elsewhere in a workflow. A metric must also have a type, it can be one of gauge , histogram , and counter ( see below ). Within the metric type a value must be specified. This value can be either a literal value of be an Argo variable . When defining a histogram , buckets must also be provided (see below). Argo variables can be included anywhere in the metric spec, such as in labels , name , help , when , etc. Metric names can only contain alphanumeric characters, _ , and : .","title":"Defining metrics"},{"location":"metrics/#metric-spec","text":"In Argo you can define a metric on the Workflow level or on the Template level. Here is an example of a Workflow level Gauge metric that will report the Workflow duration time: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : model-training- spec : entrypoint : steps metrics : prometheus : - name : exec_duration_gauge # Metric name (will be prepended with \"argo_workflows_\") labels : # Labels are optional. Avoid cardinality explosion. - key : name value : model_a help : \"Duration gauge by name\" # A help doc describing your metric. This is required. gauge : # The metric type. Available are \"gauge\", \"histogram\", and \"counter\". value : \"{{workflow.duration}}\" # The value of your metric. It could be an Argo variable (see variables doc) or a literal value ... An example of a Template -level Counter metric that will increase a counter every time the step fails: ... templates : - name : flakey metrics : prometheus : - name : result_counter help : \"Count of step execution by result status\" labels : - key : name value : flakey when : \"{{status}} == Failed\" # Emit the metric conditionally. Works the same as normal \"when\" counter : value : \"1\" # This increments the counter by 1 container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] ... A similar example of such a Counter metric that will increase for every step status ... templates : - name : flakey metrics : prometheus : - name : result_counter help : \"Count of step execution by result status\" labels : - key : name value : flakey - key : status value : \"{{status}}\" # Argo variable in `labels` counter : value : \"1\" container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] ... Finally, an example of a Template -level Histogram metric that tracks an internal value: ... templates : - name : random-int metrics : prometheus : - name : random_int_step_histogram help : \"Value of the int emitted by random-int at step level\" when : \"{{status}} == Succeeded\" # Only emit metric when step succeeds histogram : buckets : # Bins must be defined for histogram metrics - 2.01 # and are part of the metric descriptor. - 4.01 # All metrics in this series MUST have the - 6.01 # same buckets. 
- 8.01 - 10.01 value : \"{{outputs.parameters.rand-int-value}}\" # References itself for its output (see variables doc) outputs : parameters : - name : rand-int-value globalName : rand-int-value valueFrom : path : /tmp/rand_int.txt container : image : alpine:latest command : [ sh , -c ] args : [ \"RAND_INT=$((1 + RANDOM % 10)); echo $RAND_INT; echo $RAND_INT > /tmp/rand_int.txt\" ] ...","title":"Metric Spec"},{"location":"metrics/#real-time-metrics","text":"Argo supports a limited number of real-time metrics. These metrics are emitted in real-time, beginning when the step execution starts and ending when it completes. Real-time metrics are only available on Gauge type metrics and with a limited number of variables . To define a real-time metric simply add realtime: true to a gauge metric with a valid real-time variable. For example: gauge : realtime : true value : \"{{duration}}\"","title":"Real-Time Metrics"},{"location":"metrics/#metrics-endpoint","text":"By default, metrics are emitted by the workflow-controller on port 9090 on the /metrics path. By port-forwarding to the pod you can view the metrics in your browser at http://localhost:9090/metrics : kubectl -n argo port-forward deploy/workflow-controller 9090:9090 A metrics service is not installed as part of the default installation so you will need to add one if you wish to use a Prometheus Service Monitor: cat <.value The value of input parameter NAME The operator can be '=' or '!='. Multiple selectors can be combined with a comma, in which case they are anded together. Examples \u00b6 To filter for nodes where the input parameter 'foo' is equal to 'bar': --node-field-selector = inputs.parameters.foo.value = bar To filter for nodes where the input parameter 'foo' is equal to 'bar' and phase is not running: --node-field-selector = foo1 = bar1,phase! = Running Consider the following workflow: \u25cf appr-promotion-ffsv4 code-release \u251c\u2500\u2714 start sample-template/email appr-promotion-ffsv4-3704914002 2s \u251c\u2500\u25cf app1 wftempl1/approval-and-promotion \u2502 \u251c\u2500\u2714 notification-email sample-template/email appr-promotion-ffsv4-524476380 2s \u2502 \u2514\u2500\u01c1 wait-approval sample-template/waiting-for-approval \u251c\u2500\u2714 app2 wftempl2/promotion \u2502 \u251c\u2500\u2714 notification-email sample-template/email appr-promotion-ffsv4-2580536603 2s \u2502 \u251c\u2500\u2714 pr-approval sample-template/approval appr-promotion-ffsv4-3445567645 2s \u2502 \u2514\u2500\u2714 deployment sample-template/promote appr-promotion-ffsv4-970728982 1s \u2514\u2500\u25cf app3 wftempl1/approval-and-promotion \u251c\u2500\u2714 notification-email sample-template/email appr-promotion-ffsv4-388318034 2s \u2514\u2500\u01c1 wait-approval sample-template/waiting-for-approval Here we have two steps with the same displayName : wait-approval . To select one to suspend, we need to use their name , either appr-promotion-ffsv4.app1.wait-approval or appr-promotion-ffsv4.app3.wait-approval . If it is not clear what the full name of a node is, it can be found using kubectl : $ kubectl get wf appr-promotion-ffsv4 -o yaml ... 
appr-promotion-ffsv4-3235686597: boundaryID: appr-promotion-ffsv4-3079407832 displayName: wait-approval # <- Display Name finishedAt: null id: appr-promotion-ffsv4-3235686597 name: appr-promotion-ffsv4.app1.wait-approval # <- Full Name phase: Running startedAt: \"2021-01-20T17:00:25Z\" templateRef: name: sample-template template: waiting-for-approval templateScope: namespaced/wftempl1 type: Suspend ...","title":"Node Field Selectors"},{"location":"node-field-selector/#node-field-selectors","text":"v2.8 and after","title":"Node Field Selectors"},{"location":"node-field-selector/#introduction","text":"The resume, stop and retry Argo CLI and API commands support a --node-field-selector parameter to allow the user to select a subset of nodes for the command to apply to. In the case of the resume and stop commands these are the nodes that should be resumed or stopped. In the case of the retry command it allows specifying nodes that should be restarted even if they were previously successful (and must be used in combination with --restart-successful ) The format of this when used with the CLI is: --node-field-selector = FIELD = VALUE","title":"Introduction"},{"location":"node-field-selector/#possible-options","text":"The field can be any of: Field Description displayName Display name of the node. This is the name of the node as it is displayed on the CLI or UI, without considering its ancestors (see example below). This is a useful shortcut if there is only one node with the same displayName name Full name of the node. This is the full name of the node, including its ancestors (see example below). Using name is necessary when two or more nodes share the same displayName and disambiguation is required. templateName Template name of the node phase Phase status of the node - e.g. Running templateRef.name The name of the workflow template the node is referring to templateRef.template The template within the workflow template the node is referring to inputs.parameters..value The value of input parameter NAME The operator can be '=' or '!='. Multiple selectors can be combined with a comma, in which case they are anded together.","title":"Possible options"},{"location":"node-field-selector/#examples","text":"To filter for nodes where the input parameter 'foo' is equal to 'bar': --node-field-selector = inputs.parameters.foo.value = bar To filter for nodes where the input parameter 'foo' is equal to 'bar' and phase is not running: --node-field-selector = foo1 = bar1,phase! = Running Consider the following workflow: \u25cf appr-promotion-ffsv4 code-release \u251c\u2500\u2714 start sample-template/email appr-promotion-ffsv4-3704914002 2s \u251c\u2500\u25cf app1 wftempl1/approval-and-promotion \u2502 \u251c\u2500\u2714 notification-email sample-template/email appr-promotion-ffsv4-524476380 2s \u2502 \u2514\u2500\u01c1 wait-approval sample-template/waiting-for-approval \u251c\u2500\u2714 app2 wftempl2/promotion \u2502 \u251c\u2500\u2714 notification-email sample-template/email appr-promotion-ffsv4-2580536603 2s \u2502 \u251c\u2500\u2714 pr-approval sample-template/approval appr-promotion-ffsv4-3445567645 2s \u2502 \u2514\u2500\u2714 deployment sample-template/promote appr-promotion-ffsv4-970728982 1s \u2514\u2500\u25cf app3 wftempl1/approval-and-promotion \u251c\u2500\u2714 notification-email sample-template/email appr-promotion-ffsv4-388318034 2s \u2514\u2500\u01c1 wait-approval sample-template/waiting-for-approval Here we have two steps with the same displayName : wait-approval . 
To select one to suspend, we need to use their name , either appr-promotion-ffsv4.app1.wait-approval or appr-promotion-ffsv4.app3.wait-approval . If it is not clear what the full name of a node is, it can be found using kubectl : $ kubectl get wf appr-promotion-ffsv4 -o yaml ... appr-promotion-ffsv4-3235686597: boundaryID: appr-promotion-ffsv4-3079407832 displayName: wait-approval # <- Display Name finishedAt: null id: appr-promotion-ffsv4-3235686597 name: appr-promotion-ffsv4.app1.wait-approval # <- Full Name phase: Running startedAt: \"2021-01-20T17:00:25Z\" templateRef: name: sample-template template: waiting-for-approval templateScope: namespaced/wftempl1 type: Suspend ...","title":"Examples"},{"location":"offloading-large-workflows/","text":"Offloading Large Workflows \u00b6 v2.4 and after Argo stores workflows as Kubernetes resources (i.e. within EtcD). This creates a limit to their size as resources must be under 1MB. Each resource includes the status of each node, which is stored in the /status/nodes field for the resource. This can be over 1MB. If this happens, we try and compress the node status and store it in /status/compressedNodes . If the status is still too large, we then try and store it in an SQL database. To enable this feature, configure a Postgres or MySQL database under persistence in your configuration and set nodeStatusOffLoad: true . FAQ \u00b6 Why aren't my workflows appearing in the database? \u00b6 Offloading is expensive and often unnecessary, so we only offload when we need to. Your workflows aren't probably large enough. Error Failed to submit workflow: etcdserver: request is too large. \u00b6 You must use the Argo CLI having exported export ARGO_SERVER=... . Error offload node status is not supported \u00b6 Even after compressing node statuses, the workflow exceeded the EtcD size limit. To resolve, either enable node status offload as described above or look for ways to reduce the size of your workflow manifest: Use withItems or withParams to consolidate similar templates into a single parametrized template Use template defaults to factor shared template options to the workflow level Use workflow templates to factor frequently-used templates into separate resources Use workflows of workflows to factor a large workflow into a workflow of smaller workflows","title":"Offloading Large Workflows"},{"location":"offloading-large-workflows/#offloading-large-workflows","text":"v2.4 and after Argo stores workflows as Kubernetes resources (i.e. within EtcD). This creates a limit to their size as resources must be under 1MB. Each resource includes the status of each node, which is stored in the /status/nodes field for the resource. This can be over 1MB. If this happens, we try and compress the node status and store it in /status/compressedNodes . If the status is still too large, we then try and store it in an SQL database. To enable this feature, configure a Postgres or MySQL database under persistence in your configuration and set nodeStatusOffLoad: true .","title":"Offloading Large Workflows"},{"location":"offloading-large-workflows/#faq","text":"","title":"FAQ"},{"location":"offloading-large-workflows/#why-arent-my-workflows-appearing-in-the-database","text":"Offloading is expensive and often unnecessary, so we only offload when we need to. 
Your workflows probably aren't large enough.","title":"Why aren't my workflows appearing in the database?"},{"location":"offloading-large-workflows/#error-failed-to-submit-workflow-etcdserver-request-is-too-large","text":"You must use the Argo CLI, having first run export ARGO_SERVER=... .","title":"Error Failed to submit workflow: etcdserver: request is too large."},{"location":"offloading-large-workflows/#error-offload-node-status-is-not-supported","text":"Even after compressing node statuses, the workflow exceeded the EtcD size limit. To resolve, either enable node status offload as described above or look for ways to reduce the size of your workflow manifest: Use withItems or withParams to consolidate similar templates into a single parametrized template Use template defaults to factor shared template options to the workflow level Use workflow templates to factor frequently-used templates into separate resources Use workflows of workflows to factor a large workflow into a workflow of smaller workflows","title":"Error offload node status is not supported"},{"location":"plugin-directory/","text":"Plugin Directory \u00b6 \u26a0\ufe0f Disclaimer: We take only minimal action to verify the authenticity of plugins. Install at your own risk. Name Description Hello Hello world plugin you can use as a template Slack Example Slack plugin Argo CD Sync Argo CD apps, e.g. to use Argo as CI Volcano Job Plugin Execute Volcano Job Python Plugin for executing Python Hermes Send notifications, e.g. Slack WASM Run Web Assembly (WASM) tasks Chaos Mesh Plugin Run Chaos Mesh experiment Pull Request Build Status Send build status of pull request to Git provider Atomic Workflow Plugin Stop the workflows which come from the same WorkflowTemplate and have the same parameters AWS Plugin Argo Workflows Executor Plugin for AWS Services, e.g. SageMaker Pipelines, Glue, etc.","title":"Plugin Directory"},{"location":"plugin-directory/#plugin-directory","text":"\u26a0\ufe0f Disclaimer: We take only minimal action to verify the authenticity of plugins. Install at your own risk. Name Description Hello Hello world plugin you can use as a template Slack Example Slack plugin Argo CD Sync Argo CD apps, e.g. to use Argo as CI Volcano Job Plugin Execute Volcano Job Python Plugin for executing Python Hermes Send notifications, e.g. Slack WASM Run Web Assembly (WASM) tasks Chaos Mesh Plugin Run Chaos Mesh experiment Pull Request Build Status Send build status of pull request to Git provider Atomic Workflow Plugin Stop the workflows which come from the same WorkflowTemplate and have the same parameters AWS Plugin Argo Workflows Executor Plugin for AWS Services, e.g. SageMaker Pipelines, Glue, etc.","title":"Plugin Directory"},{"location":"plugins/","text":"Plugins \u00b6 Plugins allow you to extend Argo Workflows to add new capabilities. You don't need to learn Golang; you can write in any language, including Python. Simple: a plugin just responds to RPC HTTP requests. You can iterate quickly by changing the plugin at runtime. You can get your plugin running today, no need to wait 3-5 months for review, approval, merge and an Argo software release. Executor plugins can be written and installed by both users and admins.","title":"Plugins"},{"location":"plugins/#plugins","text":"Plugins allow you to extend Argo Workflows to add new capabilities. You don't need to learn Golang; you can write in any language, including Python. Simple: a plugin just responds to RPC HTTP requests. 
You can iterate quickly by changing the plugin at runtime. You can get your plugin running today, no need to wait 3-5 months for review, approval, merge and an Argo software release. Executor plugins can be written and installed by both users and admins.","title":"Plugins"},{"location":"progress/","text":"Workflow Progress \u00b6 v2.12 and after When you run a workflow, the controller will report on its progress. We define progress as two numbers, N/M such that 0 <= N <= M and 0 <= M . N is the number of completed tasks. M is the total number of tasks. E.g. 0/0 , 0/1 or 50/100 . Unlike estimated duration , progress is deterministic. I.e. it will be the same for each workflow, regardless of any problems. Progress for each node is calculated as follows: For a pod node either 1/1 if completed or 0/1 otherwise. For non-leaf nodes, the sum of its children. For a whole workflow, progress is the sum of all its leaf nodes. Warning M will increase during the workflow run each time a node is added to the graph. Self reporting progress \u00b6 v3.3 and after Pods in a workflow can report their own progress during their runtime. This self-reported progress overrides the auto-generated progress. Reporting progress works as follows: create and write the progress to a file indicated by the env variable ARGO_PROGRESS_FILE format of the progress must be N/M The executor will read this file every 3s and if there was an update, patch the pod annotations with workflows.argoproj.io/progress: N/M . The controller picks this up and writes the progress to the appropriate Status properties. Initially the progress of a workflow's pod is always 0/1 . If you want to influence this, make sure to set an initial progress annotation on the pod: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : progress- spec : entrypoint : main templates : - name : main dag : tasks : - name : progress template : progress - name : progress metadata : annotations : workflows.argoproj.io/progress : 0/100 container : image : alpine:3.14 command : [ \"/bin/sh\" , \"-c\" ] args : - | for i in `seq 1 10`; do sleep 10; echo \"$(($i*10))\"'/100' > $ARGO_PROGRESS_FILE; done","title":"Workflow Progress"},{"location":"progress/#workflow-progress","text":"v2.12 and after When you run a workflow, the controller will report on its progress. We define progress as two numbers, N/M such that 0 <= N <= M and 0 <= M . N is the number of completed tasks. M is the total number of tasks. E.g. 0/0 , 0/1 or 50/100 . Unlike estimated duration , progress is deterministic. I.e. it will be the same for each workflow, regardless of any problems. Progress for each node is calculated as follows: For a pod node either 1/1 if completed or 0/1 otherwise. For non-leaf nodes, the sum of its children. For a whole workflow, progress is the sum of all its leaf nodes. Warning M will increase during the workflow run each time a node is added to the graph.","title":"Workflow Progress"},{"location":"progress/#self-reporting-progress","text":"v3.3 and after Pods in a workflow can report their own progress during their runtime. This self-reported progress overrides the auto-generated progress. Reporting progress works as follows: create and write the progress to a file indicated by the env variable ARGO_PROGRESS_FILE format of the progress must be N/M The executor will read this file every 3s and if there was an update, patch the pod annotations with workflows.argoproj.io/progress: N/M . 
The controller picks this up and writes the progress to the appropriate Status properties. Initially the progress of a workflows' pod is always 0/1 . If you want to influence this, make sure to set an initial progress annotation on the pod: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : progress- spec : entrypoint : main templates : - name : main dag : tasks : - name : progress template : progress - name : progress metadata : annotations : workflows.argoproj.io/progress : 0/100 container : image : alpine:3.14 command : [ \"/bin/sh\" , \"-c\" ] args : - | for i in `seq 1 10`; do sleep 10; echo \"$(($i*10))\"'/100' > $ARGO_PROGRESS_FILE; done","title":"Self reporting progress"},{"location":"public-api/","text":"Public API \u00b6 Argo Workflows public API is defined by the following: The file api/openapi-spec/swagger.json The schema of the table argo_archived_workflows . The installation options.","title":"Public API"},{"location":"public-api/#public-api","text":"Argo Workflows public API is defined by the following: The file api/openapi-spec/swagger.json The schema of the table argo_archived_workflows . The installation options.","title":"Public API"},{"location":"quick-start/","text":"Quick Start \u00b6 To see how Argo Workflows work, you can install it and run examples of simple workflows. Before you start you need a Kubernetes cluster and kubectl set up to be able to access that cluster. For the purposes of getting up and running, a local cluster is fine. You could consider the following local Kubernetes cluster options: minikube kind k3s or k3d Docker Desktop Alternatively, if you want to try out Argo Workflows and don't want to set up a Kubernetes cluster, try the Killercoda course . Development vs. Production These instructions are intended to help you get started quickly. They are not suitable in production. For production installs, please refer to the installation documentation . Install Argo Workflows \u00b6 To install Argo Workflows, navigate to the releases page and find the release you wish to use (the latest full release is preferred). Scroll down to the Controller and Server section and execute the kubectl commands. Below is an example of the install commands, ensure that you update the command to install the correct version number: kubectl create namespace argo kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v<>/install.yaml Patch argo-server authentication \u00b6 The argo-server (and thus the UI) defaults to client authentication, which requires clients to provide their Kubernetes bearer token in order to authenticate. For more information, refer to the Argo Server Auth Mode documentation . We will switch the authentication mode to server so that we can bypass the UI login for now: kubectl patch deployment \\ argo-server \\ --namespace argo \\ --type = 'json' \\ -p = '[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/args\", \"value\": [ \"server\", \"--auth-mode=server\" ]}]' Port-forward the UI \u00b6 Open a port-forward so you can access the UI: kubectl -n argo port-forward deployment/argo-server 2746 :2746 This will serve the UI on https://localhost:2746 . Due to the self-signed certificate, you will receive a TLS error which you will need to manually approve. Pay close attention to the URI. It uses https and not http . Navigating to http://localhost:2746 result in server-side error that breaks the port-forwarding. 
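As a quick sanity check of the port-forward (a minimal sketch only: it assumes the server's version endpoint at /api/v1/version and uses -k to accept the self-signed certificate), you can query the API from another terminal: curl -k https://localhost:2746/api/v1/version A JSON response containing the server version indicates that the UI and API are reachable through the port-forward. 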
Install the Argo Workflows CLI \u00b6 You can more easily interact with Argo Workflows with the Argo CLI . Submitting an example workflow \u00b6 Submit an example workflow (CLI) \u00b6 argo submit -n argo --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-world.yaml The --watch flag used above will allow you to observe the workflow as it runs and the status of whether it succeeds. When the workflow completes, the watch on the workflow will stop. You can list all the Workflows you have submitted by running the command below: argo list -n argo You will notice the Workflow name has a hello-world- prefix followed by random characters. These characters are used to give Workflows unique names to help identify specific runs of a Workflow. If you submitted this Workflow again, the next Workflow run would have a different name. Using the argo get command, you can always review details of a Workflow run. The output for the command below will be the same as the information shown as when you submitted the Workflow: argo get -n argo @latest The @latest argument to the CLI is a short cut to view the latest Workflow run that was executed. You can also observe the logs of the Workflow run by running the following: argo logs -n argo @latest Submit an example workflow (GUI) \u00b6 Open a port-forward so you can access the UI: kubectl -n argo port-forward deployment/argo-server 2746 :2746 Navigate your browser to https://localhost:2746 . Click + Submit New Workflow and then Edit using full workflow options You can find an example workflow already in the text field. Press + Create to start the workflow.","title":"Quick Start"},{"location":"quick-start/#quick-start","text":"To see how Argo Workflows work, you can install it and run examples of simple workflows. Before you start you need a Kubernetes cluster and kubectl set up to be able to access that cluster. For the purposes of getting up and running, a local cluster is fine. You could consider the following local Kubernetes cluster options: minikube kind k3s or k3d Docker Desktop Alternatively, if you want to try out Argo Workflows and don't want to set up a Kubernetes cluster, try the Killercoda course . Development vs. Production These instructions are intended to help you get started quickly. They are not suitable in production. For production installs, please refer to the installation documentation .","title":"Quick Start"},{"location":"quick-start/#install-argo-workflows","text":"To install Argo Workflows, navigate to the releases page and find the release you wish to use (the latest full release is preferred). Scroll down to the Controller and Server section and execute the kubectl commands. Below is an example of the install commands, ensure that you update the command to install the correct version number: kubectl create namespace argo kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v<>/install.yaml","title":"Install Argo Workflows"},{"location":"quick-start/#patch-argo-server-authentication","text":"The argo-server (and thus the UI) defaults to client authentication, which requires clients to provide their Kubernetes bearer token in order to authenticate. For more information, refer to the Argo Server Auth Mode documentation . 
We will switch the authentication mode to server so that we can bypass the UI login for now: kubectl patch deployment \\ argo-server \\ --namespace argo \\ --type = 'json' \\ -p = '[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/args\", \"value\": [ \"server\", \"--auth-mode=server\" ]}]'","title":"Patch argo-server authentication"},{"location":"quick-start/#port-forward-the-ui","text":"Open a port-forward so you can access the UI: kubectl -n argo port-forward deployment/argo-server 2746 :2746 This will serve the UI on https://localhost:2746 . Due to the self-signed certificate, you will receive a TLS error which you will need to manually approve. Pay close attention to the URI. It uses https and not http . Navigating to http://localhost:2746 results in a server-side error that breaks the port-forwarding.","title":"Port-forward the UI"},{"location":"quick-start/#install-the-argo-workflows-cli","text":"You can more easily interact with Argo Workflows with the Argo CLI .","title":"Install the Argo Workflows CLI"},{"location":"quick-start/#submitting-an-example-workflow","text":"","title":"Submitting an example workflow"},{"location":"quick-start/#submit-an-example-workflow-cli","text":"argo submit -n argo --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-world.yaml The --watch flag used above will allow you to observe the workflow as it runs and the status of whether it succeeds. When the workflow completes, the watch on the workflow will stop. You can list all the Workflows you have submitted by running the command below: argo list -n argo You will notice the Workflow name has a hello-world- prefix followed by random characters. These characters are used to give Workflows unique names to help identify specific runs of a Workflow. If you submitted this Workflow again, the next Workflow run would have a different name. Using the argo get command, you can always review details of a Workflow run. The output for the command below will be the same as the information shown when you submitted the Workflow: argo get -n argo @latest The @latest argument to the CLI is a shortcut to view the latest Workflow run that was executed. You can also observe the logs of the Workflow run by running the following: argo logs -n argo @latest","title":"Submit an example workflow (CLI)"},{"location":"quick-start/#submit-an-example-workflow-gui","text":"Open a port-forward so you can access the UI: kubectl -n argo port-forward deployment/argo-server 2746 :2746 Navigate your browser to https://localhost:2746 . Click + Submit New Workflow and then Edit using full workflow options You can find an example workflow already in the text field. Press + Create to start the workflow.","title":"Submit an example workflow (GUI)"},{"location":"releases/","text":"Releases \u00b6 You can find the most recent version under GitHub release . Versioning \u00b6 Versions are expressed as x.y.z , where x is the major version, y is the minor version, and z is the patch version, following Semantic Versioning terminology. Argo Workflows does not use Semantic Versioning. Minor versions may contain breaking changes. Patch versions only contain bug fixes and minor features. For stable , use the latest patch version. \u26a0\ufe0f Read the upgrading guide to find out about breaking changes before any upgrade. Supported Versions \u00b6 We maintain release branches for the most recent two minor releases. 
Fixes may be back-ported to release branches, depending on severity, risk, and, feasibility. Breaking changes will be documented in upgrading guide . Supported Version Skew \u00b6 Both the argo-server and argocli should be the same version as the controller. Release Cycle \u00b6 New minor versions are released roughly every 6 months. Release candidates (RCs) for major and minor releases are typically available for 4-6 weeks before the release becomes generally available (GA). Features may be shipped in subsequent release candidates. When features are shipped in a new release candidate, the most recent release candidate will be available for at least 2 weeks to ensure it is tested sufficiently before it is pushed to GA. If bugs are found with a feature and are not resolved within the 2 week period, the features will be rolled back so as to be saved for the next major/minor release timeline, and a new release candidate will be cut for testing before pushing to GA. Otherwise, we typically release every two weeks: Patch fixes for the current stable version. The next release candidate, if we are currently in a release-cycle. Kubernetes Compatibility Matrix \u00b6 Argo Workflows \\ Kubernetes 1.17 1.18 1.19 1.20 1.21 1.22 1.23 1.24 1.25 1.26 1.27 3.5 x x x ? ? ? ? ? \u2713 \u2713 \u2713 3.4 x x x ? \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 3.3 ? ? ? ? \u2713 \u2713 \u2713 ? ? ? ? 3.2 ? ? \u2713 \u2713 \u2713 ? ? ? ? ? ? 3.1 \u2713 \u2713 \u2713 ? ? ? ? ? ? ? ? \u2713 Fully supported versions. ? Due to breaking changes might not work. Also, we haven't thoroughly tested against this version. \u2715 Unsupported versions. Notes on Compatibility \u00b6 Argo versions may be compatible with newer and older versions than what it is listed but only three minor versions are supported per Argo release unless otherwise noted. The main branch of Argo Workflows is currently tested on Kubernetes 1.27.","title":"Releases"},{"location":"releases/#releases","text":"You can find the most recent version under Github release .","title":"Releases"},{"location":"releases/#versioning","text":"Versions are expressed as x.y.z , where x is the major version, y is the minor version, and z is the patch version, following Semantic Versioning terminology. Argo Workflows does not use Semantic Versioning. Minor versions may contain breaking changes. Patch versions only contain bug fixes and minor features. For stable , use the latest patch version. \u26a0\ufe0f Read the upgrading guide to find out about breaking changes before any upgrade.","title":"Versioning"},{"location":"releases/#supported-versions","text":"We maintain release branches for the most recent two minor releases. Fixes may be back-ported to release branches, depending on severity, risk, and, feasibility. Breaking changes will be documented in upgrading guide .","title":"Supported Versions"},{"location":"releases/#supported-version-skew","text":"Both the argo-server and argocli should be the same version as the controller.","title":"Supported Version Skew"},{"location":"releases/#release-cycle","text":"New minor versions are released roughly every 6 months. Release candidates (RCs) for major and minor releases are typically available for 4-6 weeks before the release becomes generally available (GA). Features may be shipped in subsequent release candidates. When features are shipped in a new release candidate, the most recent release candidate will be available for at least 2 weeks to ensure it is tested sufficiently before it is pushed to GA. 
If bugs are found with a feature and are not resolved within the 2 week period, the features will be rolled back so as to be saved for the next major/minor release timeline, and a new release candidate will be cut for testing before pushing to GA. Otherwise, we typically release every two weeks: Patch fixes for the current stable version. The next release candidate, if we are currently in a release-cycle.","title":"Release Cycle"},{"location":"releases/#kubernetes-compatibility-matrix","text":"Argo Workflows \\ Kubernetes 1.17 1.18 1.19 1.20 1.21 1.22 1.23 1.24 1.25 1.26 1.27 3.5 x x x ? ? ? ? ? \u2713 \u2713 \u2713 3.4 x x x ? \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 3.3 ? ? ? ? \u2713 \u2713 \u2713 ? ? ? ? 3.2 ? ? \u2713 \u2713 \u2713 ? ? ? ? ? ? 3.1 \u2713 \u2713 \u2713 ? ? ? ? ? ? ? ? \u2713 Fully supported versions. ? Due to breaking changes might not work. Also, we haven't thoroughly tested against this version. \u2715 Unsupported versions.","title":"Kubernetes Compatibility Matrix"},{"location":"releases/#notes-on-compatibility","text":"Argo versions may be compatible with newer and older versions than what it is listed but only three minor versions are supported per Argo release unless otherwise noted. The main branch of Argo Workflows is currently tested on Kubernetes 1.27.","title":"Notes on Compatibility"},{"location":"releasing/","text":"Release Instructions \u00b6 Cherry-Picking Fixes \u00b6 \u270b Before you start, make sure you have created a release branch (e.g. release-3.3 ) and it's passing CI. Then get a list of commits you may want to cherry-pick: ./hack/cherry-pick.sh release-3.3 \"fix\" true ./hack/cherry-pick.sh release-3.3 \"chore(deps)\" true ./hack/cherry-pick.sh release-3.3 \"build\" true ./hack/cherry-pick.sh release-3.3 \"ci\" true To automatically cherry-pick, run the following: ./hack/cherry-pick.sh release-3.3 \"fix\" false Then look for \"failed to cherry-pick\" in the log to find commits that fail to be cherry-picked and decide if a manual patch is necessary. Ignore: Fixes for features only on main . Dependency upgrades, unless they fix known security issues. Build or CI improvements, unless the release pipeline is blocked without them. Cherry-pick the first commit. Run make test locally before pushing. If the build timeouts the build caches may have gone, try re-running. Don't cherry-pick another commit until the CI passes. It is harder to find the cause of a new failed build if the last build failed too. Cherry-picking commits one-by-one and then waiting for the CI will take a long time. Instead, cherry-pick each commit then run make test locally before pushing. Publish Release \u00b6 \u270b Before you start, make sure the branch is passing CI. Push a new tag to the release branch. E.g.: git tag v3.3.4 git push upstream v3.3.4 # or origin if you do not use upstream GitHub Actions will automatically build and publish your release. This takes about 1h. Set your self a reminder to check this was successful. Update Changelog \u00b6 Once the tag is published, GitHub Actions will automatically open a PR to update the changelog. 
Once the PR is ready, you can approve it, enable auto-merge, and then run the following to force trigger the CI build: git branch -D create-pull-request/changelog git fetch upstream git checkout --track upstream/create-pull-request/changelog git commit -s --allow-empty -m \"docs: Force trigger CI\" git push upstream create-pull-request/changelog","title":"Release Instructions"},{"location":"releasing/#release-instructions","text":"","title":"Release Instructions"},{"location":"releasing/#cherry-picking-fixes","text":"\u270b Before you start, make sure you have created a release branch (e.g. release-3.3 ) and it's passing CI. Then get a list of commits you may want to cherry-pick: ./hack/cherry-pick.sh release-3.3 \"fix\" true ./hack/cherry-pick.sh release-3.3 \"chore(deps)\" true ./hack/cherry-pick.sh release-3.3 \"build\" true ./hack/cherry-pick.sh release-3.3 \"ci\" true To automatically cherry-pick, run the following: ./hack/cherry-pick.sh release-3.3 \"fix\" false Then look for \"failed to cherry-pick\" in the log to find commits that fail to be cherry-picked and decide if a manual patch is necessary. Ignore: Fixes for features only on main . Dependency upgrades, unless they fix known security issues. Build or CI improvements, unless the release pipeline is blocked without them. Cherry-pick the first commit. Run make test locally before pushing. If the build timeouts the build caches may have gone, try re-running. Don't cherry-pick another commit until the CI passes. It is harder to find the cause of a new failed build if the last build failed too. Cherry-picking commits one-by-one and then waiting for the CI will take a long time. Instead, cherry-pick each commit then run make test locally before pushing.","title":"Cherry-Picking Fixes"},{"location":"releasing/#publish-release","text":"\u270b Before you start, make sure the branch is passing CI. Push a new tag to the release branch. E.g.: git tag v3.3.4 git push upstream v3.3.4 # or origin if you do not use upstream GitHub Actions will automatically build and publish your release. This takes about 1h. Set your self a reminder to check this was successful.","title":"Publish Release"},{"location":"releasing/#update-changelog","text":"Once the tag is published, GitHub Actions will automatically open a PR to update the changelog. Once the PR is ready, you can approve it, enable auto-merge, and then run the following to force trigger the CI build: git branch -D create-pull-request/changelog git fetch upstream git checkout --track upstream/create-pull-request/changelog git commit -s --allow-empty -m \"docs: Force trigger CI\" git push upstream create-pull-request/changelog","title":"Update Changelog"},{"location":"resource-duration/","text":"Resource Duration \u00b6 v2.7 and after Argo Workflows provides an indication of how much resource your workflow has used and saves this information. This is intended to be an indicative but not accurate value. Calculation \u00b6 The calculation is always an estimate, and is calculated by duration.go based on container duration, specified pod resource requests, limits, or (for memory and CPU) defaults. Each indicator is divided by a common denominator depending on resource type. Base Amounts \u00b6 Each resource type has a denominator used to make large values smaller. CPU: 1 Memory: 100Mi Storage: 10Gi Ephemeral Storage: 10Gi All others: 1 The requested fraction of the base amount will be multiplied by the container's run time to get the container's Resource Duration. 
For example, if you've requested 50Mi of memory (half of the base amount), and the container runs 120sec, then the reported Resource Duration will be 60sec * (100Mi memory) . Request Defaults \u00b6 If requests are not set for a container, Kubernetes defaults to limits . If limits are not set, Argo falls back to 100m for CPU and 100Mi for memory. Note: these are Argo's defaults, not Kubernetes' defaults. For the most meaningful results, set requests and/or limits for all containers. Example \u00b6 A pod that runs for 3min, with a CPU limit of 2000m , a memory limit of 1Gi and an nvidia.com/gpu resource limit of 1 : CPU: 3min * 2000m / 1000m = 6min * (1 cpu) Memory: 3min * 1Gi / 100Mi = 30min * (100Mi memory) GPU: 3min * 1 / 1 = 3min * (1 nvidia.com/gpu) Web/CLI reporting \u00b6 Both the web and CLI give abbreviated usage, like 9m10s*cpu,6s*memory,2m31s*nvidia.com/gpu . In this context, resources like memory refer to the \"base amounts\". For example, memory means \"amount of time a resource requested 100Mi of memory.\" If a container only uses 10Mi , each second it runs will only count as a tenth-second of memory . Rounding Down \u00b6 For a short running pods (<10s), if the memory request is also small (for example, 10Mi ), then the memory value may be 0s. This is because the denominator is 100Mi .","title":"Resource Duration"},{"location":"resource-duration/#resource-duration","text":"v2.7 and after Argo Workflows provides an indication of how much resource your workflow has used and saves this information. This is intended to be an indicative but not accurate value.","title":"Resource Duration"},{"location":"resource-duration/#calculation","text":"The calculation is always an estimate, and is calculated by duration.go based on container duration, specified pod resource requests, limits, or (for memory and CPU) defaults. Each indicator is divided by a common denominator depending on resource type.","title":"Calculation"},{"location":"resource-duration/#base-amounts","text":"Each resource type has a denominator used to make large values smaller. CPU: 1 Memory: 100Mi Storage: 10Gi Ephemeral Storage: 10Gi All others: 1 The requested fraction of the base amount will be multiplied by the container's run time to get the container's Resource Duration. For example, if you've requested 50Mi of memory (half of the base amount), and the container runs 120sec, then the reported Resource Duration will be 60sec * (100Mi memory) .","title":"Base Amounts"},{"location":"resource-duration/#request-defaults","text":"If requests are not set for a container, Kubernetes defaults to limits . If limits are not set, Argo falls back to 100m for CPU and 100Mi for memory. Note: these are Argo's defaults, not Kubernetes' defaults. For the most meaningful results, set requests and/or limits for all containers.","title":"Request Defaults"},{"location":"resource-duration/#example","text":"A pod that runs for 3min, with a CPU limit of 2000m , a memory limit of 1Gi and an nvidia.com/gpu resource limit of 1 : CPU: 3min * 2000m / 1000m = 6min * (1 cpu) Memory: 3min * 1Gi / 100Mi = 30min * (100Mi memory) GPU: 3min * 1 / 1 = 3min * (1 nvidia.com/gpu)","title":"Example"},{"location":"resource-duration/#webcli-reporting","text":"Both the web and CLI give abbreviated usage, like 9m10s*cpu,6s*memory,2m31s*nvidia.com/gpu . In this context, resources like memory refer to the \"base amounts\". 
For example, memory means \"amount of time a resource requested 100Mi of memory.\" If a container only uses 10Mi , each second it runs will only count as a tenth-second of memory .","title":"Web/CLI reporting"},{"location":"resource-duration/#rounding-down","text":"For a short running pods (<10s), if the memory request is also small (for example, 10Mi ), then the memory value may be 0s. This is because the denominator is 100Mi .","title":"Rounding Down"},{"location":"resource-template/","text":"Resource Template \u00b6 v2.0 See Kubernetes Resources .","title":"Resource Template"},{"location":"resource-template/#resource-template","text":"v2.0 See Kubernetes Resources .","title":"Resource Template"},{"location":"rest-api/","text":"REST API \u00b6 Argo Server API \u00b6 v2.5 and after Argo Workflows ships with a server that provides more features and security than before. The server can be configured with or without client auth ( server --auth-mode client ). When it is disabled, then clients must pass their KUBECONFIG base 64 encoded in the HTTP Authorization header: ARGO_TOKEN = $( argo auth token ) curl -H \"Authorization: $ARGO_TOKEN \" https://localhost:2746/api/v1/workflows/argo Learn more on how to generate an access token . API reference docs : Latest docs (maybe incorrect) Interactively in the Argo Server UI . (>= v2.10)","title":"REST API"},{"location":"rest-api/#rest-api","text":"","title":"REST API"},{"location":"rest-api/#argo-server-api","text":"v2.5 and after Argo Workflows ships with a server that provides more features and security than before. The server can be configured with or without client auth ( server --auth-mode client ). When it is disabled, then clients must pass their KUBECONFIG base 64 encoded in the HTTP Authorization header: ARGO_TOKEN = $( argo auth token ) curl -H \"Authorization: $ARGO_TOKEN \" https://localhost:2746/api/v1/workflows/argo Learn more on how to generate an access token . API reference docs : Latest docs (maybe incorrect) Interactively in the Argo Server UI . (>= v2.10)","title":"Argo Server API"},{"location":"rest-examples/","text":"API Examples \u00b6 Document contains couple of examples of workflow JSON's to submit via argo-server REST API. 
v2.5 and after Assuming the namespace of argo-server is argo authentication is turned off (otherwise provide Authorization header) argo-server is available on localhost:2746 Submitting workflow \u00b6 curl --request POST \\ --url https://localhost:2746/api/v1/workflows/argo \\ --header 'content-type: application/json' \\ --data '{ \"namespace\": \"argo\", \"serverDryRun\": false, \"workflow\": { \"metadata\": { \"generateName\": \"hello-world-\", \"namespace\": \"argo\", \"labels\": { \"workflows.argoproj.io/completed\": \"false\" } }, \"spec\": { \"templates\": [ { \"name\": \"whalesay\", \"arguments\": {}, \"inputs\": {}, \"outputs\": {}, \"metadata\": {}, \"container\": { \"name\": \"\", \"image\": \"docker/whalesay:latest\", \"command\": [ \"cowsay\" ], \"args\": [ \"hello world\" ], \"resources\": {} } } ], \"entrypoint\": \"whalesay\", \"arguments\": {} } } }' Getting workflows for namespace argo \u00b6 curl --request GET \\ --url https://localhost:2746/api/v1/workflows/argo Getting single workflow for namespace argo \u00b6 curl --request GET \\ --url https://localhost:2746/api/v1/workflows/argo/abc-dthgt Deleting single workflow for namespace argo \u00b6 curl --request DELETE \\ --url https://localhost:2746/api/v1/workflows/argo/abc-dthgt","title":"API Examples"},{"location":"rest-examples/#api-examples","text":"Document contains couple of examples of workflow JSON's to submit via argo-server REST API. v2.5 and after Assuming the namespace of argo-server is argo authentication is turned off (otherwise provide Authorization header) argo-server is available on localhost:2746","title":"API Examples"},{"location":"rest-examples/#submitting-workflow","text":"curl --request POST \\ --url https://localhost:2746/api/v1/workflows/argo \\ --header 'content-type: application/json' \\ --data '{ \"namespace\": \"argo\", \"serverDryRun\": false, \"workflow\": { \"metadata\": { \"generateName\": \"hello-world-\", \"namespace\": \"argo\", \"labels\": { \"workflows.argoproj.io/completed\": \"false\" } }, \"spec\": { \"templates\": [ { \"name\": \"whalesay\", \"arguments\": {}, \"inputs\": {}, \"outputs\": {}, \"metadata\": {}, \"container\": { \"name\": \"\", \"image\": \"docker/whalesay:latest\", \"command\": [ \"cowsay\" ], \"args\": [ \"hello world\" ], \"resources\": {} } } ], \"entrypoint\": \"whalesay\", \"arguments\": {} } } }'","title":"Submitting workflow"},{"location":"rest-examples/#getting-workflows-for-namespace-argo","text":"curl --request GET \\ --url https://localhost:2746/api/v1/workflows/argo","title":"Getting workflows for namespace argo"},{"location":"rest-examples/#getting-single-workflow-for-namespace-argo","text":"curl --request GET \\ --url https://localhost:2746/api/v1/workflows/argo/abc-dthgt","title":"Getting single workflow for namespace argo"},{"location":"rest-examples/#deleting-single-workflow-for-namespace-argo","text":"curl --request DELETE \\ --url https://localhost:2746/api/v1/workflows/argo/abc-dthgt","title":"Deleting single workflow for namespace argo"},{"location":"retries/","text":"Retries \u00b6 Argo Workflows offers a range of options for retrying failed steps. 
Configuring retryStrategy in WorkflowSpec \u00b6 apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : retry-container- spec : entrypoint : retry-container templates : - name : retry-container retryStrategy : limit : \"10\" container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] The retryPolicy and expression are re-evaluated after each attempt. For example, if you set retryPolicy: OnFailure and your first attempt produces a failure then a retry will be attempted. If the second attempt produces an error, then another attempt will not be made. Retry policies \u00b6 Use retryPolicy to choose which failure types to retry: Always : Retry all failed steps OnFailure : Retry steps whose main container is marked as failed in Kubernetes OnError : Retry steps that encounter Argo controller errors, or whose init or wait containers fail OnTransientError : Retry steps that encounter errors defined as transient , or errors matching the TRANSIENT_ERROR_PATTERN environment variable . Available in version 3.0 and later. The retryPolicy applies even if you also specify an expression , but in version 3.5 or later the default policy means the expression makes the decision unless you explicitly specify a policy. The default retryPolicy is OnFailure , except in version 3.5 or later when an expression is also supplied, when it is Always . This may be easier to understand in this diagram. flowchart LR start([Will a retry be attempted]) start --> policy policy(Policy Specified?) policy-->|No|expressionNoPolicy policy-->|Yes|policyGiven policyGiven(Expression Specified?) policyGiven-->|No|policyGivenApplies policyGiven-->|Yes|policyAndExpression policyGivenApplies(Supplied Policy) policyAndExpression(Supplied Policy AND Expression) expressionNoPolicy(Expression specified?) expressionNoPolicy-->|No|onfailureNoExpr expressionNoPolicy-->|Yes|version onfailureNoExpr[OnFailure] onfailure[OnFailure AND Expression] version(Workflows version) version-->|3.4 or ealier|onfailure always[Only Expression matters] version-->|3.5 or later|always An example retry strategy: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : retry-on-error- spec : entrypoint : error-container templates : - name : error-container retryStrategy : limit : \"2\" retryPolicy : \"Always\" container : image : python command : [ \"python\" , \"-c\" ] # fail with a 80% probability args : [ \"import random; import sys; exit_code = random.choice(range(0, 5)); sys.exit(exit_code)\" ] Conditional retries \u00b6 v3.2 and after You can also use expression to control retries. The expression field accepts an expr expression and has access to the following variables: lastRetry.exitCode : The exit code of the last retry, or \"-1\" if not available lastRetry.status : The phase of the last retry: Error, Failed lastRetry.duration : The duration of the last retry, in seconds lastRetry.message : The message output from the last retry (available from version 3.5) If expression evaluates to false, the step will not be retried. The expression result will be logical and with the retryPolicy . Both must be true to retry. See example for usage. Back-Off \u00b6 You can configure the delay between retries with backoff . 
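For illustration, a retryStrategy that adds an exponential back-off between attempts might look like the sketch below. The backoff field names and values shown here are illustrative rather than authoritative; check the linked example and the field reference for the exact schema supported by your version.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: retry-backoff-
spec:
  entrypoint: retry-backoff
  templates:
    - name: retry-backoff
      retryStrategy:
        limit: 10
        backoff:
          duration: 10s    # delay before the first retry
          factor: 2        # multiply the delay by this factor after each failed attempt
          maxDuration: 1m  # stop retrying once this much time has passed
      container:
        image: python:alpine3.6
        command:
          - python
          - -c
        # exits non-zero roughly half of the time so the back-off is observable
        args:
          - import random, sys; sys.exit(random.choice([0, 1]))
```

With settings like these, the delay roughly doubles after each failed attempt until maxDuration is reached.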
See example for usage.","title":"Retries"},{"location":"retries/#retries","text":"Argo Workflows offers a range of options for retrying failed steps.","title":"Retries"},{"location":"retries/#configuring-retrystrategy-in-workflowspec","text":"apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : retry-container- spec : entrypoint : retry-container templates : - name : retry-container retryStrategy : limit : \"10\" container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] The retryPolicy and expression are re-evaluated after each attempt. For example, if you set retryPolicy: OnFailure and your first attempt produces a failure then a retry will be attempted. If the second attempt produces an error, then another attempt will not be made.","title":"Configuring retryStrategy in WorkflowSpec"},{"location":"retries/#retry-policies","text":"Use retryPolicy to choose which failure types to retry: Always : Retry all failed steps OnFailure : Retry steps whose main container is marked as failed in Kubernetes OnError : Retry steps that encounter Argo controller errors, or whose init or wait containers fail OnTransientError : Retry steps that encounter errors defined as transient , or errors matching the TRANSIENT_ERROR_PATTERN environment variable . Available in version 3.0 and later. The retryPolicy applies even if you also specify an expression , but in version 3.5 or later the default policy means the expression makes the decision unless you explicitly specify a policy. The default retryPolicy is OnFailure , except in version 3.5 or later when an expression is also supplied, when it is Always . This may be easier to understand in this diagram. flowchart LR start([Will a retry be attempted]) start --> policy policy(Policy Specified?) policy-->|No|expressionNoPolicy policy-->|Yes|policyGiven policyGiven(Expression Specified?) policyGiven-->|No|policyGivenApplies policyGiven-->|Yes|policyAndExpression policyGivenApplies(Supplied Policy) policyAndExpression(Supplied Policy AND Expression) expressionNoPolicy(Expression specified?) expressionNoPolicy-->|No|onfailureNoExpr expressionNoPolicy-->|Yes|version onfailureNoExpr[OnFailure] onfailure[OnFailure AND Expression] version(Workflows version) version-->|3.4 or ealier|onfailure always[Only Expression matters] version-->|3.5 or later|always An example retry strategy: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : retry-on-error- spec : entrypoint : error-container templates : - name : error-container retryStrategy : limit : \"2\" retryPolicy : \"Always\" container : image : python command : [ \"python\" , \"-c\" ] # fail with a 80% probability args : [ \"import random; import sys; exit_code = random.choice(range(0, 5)); sys.exit(exit_code)\" ]","title":"Retry policies"},{"location":"retries/#conditional-retries","text":"v3.2 and after You can also use expression to control retries. The expression field accepts an expr expression and has access to the following variables: lastRetry.exitCode : The exit code of the last retry, or \"-1\" if not available lastRetry.status : The phase of the last retry: Error, Failed lastRetry.duration : The duration of the last retry, in seconds lastRetry.message : The message output from the last retry (available from version 3.5) If expression evaluates to false, the step will not be retried. 
The expression result will be logical and with the retryPolicy . Both must be true to retry. See example for usage.","title":"Conditional retries"},{"location":"retries/#back-off","text":"You can configure the delay between retries with backoff . See example for usage.","title":"Back-Off"},{"location":"roadmap/","text":"Roadmap \u00b6 The roadmap is currently being revamped. If you want to join the discussions, please join our contributors meeting .","title":"Roadmap"},{"location":"roadmap/#roadmap","text":"The roadmap is currently being revamped. If you want to join the discussions, please join our contributors meeting .","title":"Roadmap"},{"location":"running-at-massive-scale/","text":"Running At Massive Scale \u00b6 Argo Workflows is an incredibly scalable tool for orchestrating workflows. It empowers you to process thousands of workflows per day, with each workflow consisting of tens of thousands of nodes. Moreover, it effortlessly handles hundreds of thousands of smaller workflows daily. However, optimizing your setup is crucial to fully leverage this capability. Run The Latest Version \u00b6 You must be running at least v3.1 for several recommendations to work. Upgrade to the very latest patch. Performance fixes often come in patches. Test Your Cluster Before You Install Argo Workflows \u00b6 You'll need a big cluster, with a big Kubernetes master. Users often encounter problems with Kubernetes needing to be configured for the scale. E.g. Kubernetes API server being too small. We recommend you test your cluster to make sure it can run the number of pods they need, even before installing Argo. Create pods at the rate you expect that it'll be created in production. Make sure Kubernetes can keep up with requests to delete pods at the same rate. You'll need to GC data quickly. The less data that Kubernetes and Argo deal with, the less work they need to do. Use pod GC and workflow GC to achieve this. Overwhelmed Kubernetes API \u00b6 Where Argo has a lot of work to do, the Kubernetes API can be overwhelmed. There are several strategies to reduce this: Use the Emissary executor (>= v3.1). This does not make any Kubernetes API requests (except for resources template). Limit the number of concurrent workflows using parallelism. Rate-limit pod creation configuration (>= v3.1). Set DEFAULT_REQUEUE_TIME=1m Overwhelmed Database \u00b6 If you're running workflows with many nodes, you'll probably be offloading data to a database. Offloaded data is kept for 5m. You can reduce the number of records created by setting DEFAULT_REQUEUE_TIME=1m . This will slow reconciliation, but will suit workflows where nodes run for over 1m. Miscellaneous \u00b6 See also Scaling .","title":"Running At Massive Scale"},{"location":"running-at-massive-scale/#running-at-massive-scale","text":"Argo Workflows is an incredibly scalable tool for orchestrating workflows. It empowers you to process thousands of workflows per day, with each workflow consisting of tens of thousands of nodes. Moreover, it effortlessly handles hundreds of thousands of smaller workflows daily. However, optimizing your setup is crucial to fully leverage this capability.","title":"Running At Massive Scale"},{"location":"running-at-massive-scale/#run-the-latest-version","text":"You must be running at least v3.1 for several recommendations to work. Upgrade to the very latest patch. 
Performance fixes often come in patches.","title":"Run The Latest Version"},{"location":"running-at-massive-scale/#test-your-cluster-before-you-install-argo-workflows","text":"You'll need a big cluster, with a big Kubernetes master. Users often encounter problems with Kubernetes needing to be configured for the scale. E.g. Kubernetes API server being too small. We recommend you test your cluster to make sure it can run the number of pods they need, even before installing Argo. Create pods at the rate you expect that it'll be created in production. Make sure Kubernetes can keep up with requests to delete pods at the same rate. You'll need to GC data quickly. The less data that Kubernetes and Argo deal with, the less work they need to do. Use pod GC and workflow GC to achieve this.","title":"Test Your Cluster Before You Install Argo Workflows"},{"location":"running-at-massive-scale/#overwhelmed-kubernetes-api","text":"Where Argo has a lot of work to do, the Kubernetes API can be overwhelmed. There are several strategies to reduce this: Use the Emissary executor (>= v3.1). This does not make any Kubernetes API requests (except for resources template). Limit the number of concurrent workflows using parallelism. Rate-limit pod creation configuration (>= v3.1). Set DEFAULT_REQUEUE_TIME=1m","title":"Overwhelmed Kubernetes API"},{"location":"running-at-massive-scale/#overwhelmed-database","text":"If you're running workflows with many nodes, you'll probably be offloading data to a database. Offloaded data is kept for 5m. You can reduce the number of records created by setting DEFAULT_REQUEUE_TIME=1m . This will slow reconciliation, but will suit workflows where nodes run for over 1m.","title":"Overwhelmed Database"},{"location":"running-at-massive-scale/#miscellaneous","text":"See also Scaling .","title":"Miscellaneous"},{"location":"running-locally/","text":"Running Locally \u00b6 You have two options: Use the Dev Container . This takes about 7 minutes. This can be used with VSCode, the devcontainer CLI, or GitHub Codespaces. Install the requirements on your computer manually. This takes about 1 hour. Development Container \u00b6 The development container should be able to do everything you need to do to develop Argo Workflows without installing tools on your local machine. It takes quite a long time to build the container. It runs k3d inside the container so you have a cluster to test against. To communicate with services running either in other development containers or directly on the local machine (e.g. a database), the following URL can be used in the workflow spec: host.docker.internal: . This facilitates the implementation of workflows which need to connect to a database or an API server. You can use the development container in a few different ways: Visual Studio Code with Dev Containers extension . Open your argo-workflows folder in VSCode and it should offer to use the development container automatically. VSCode will allow you to forward ports to allow your external browser to access the running components. devcontainer CLI . Once installed, go to your argo-workflows folder and run devcontainer up --workspace-folder . followed by devcontainer exec --workspace-folder . /bin/bash to get a shell where you can build the code. You can use any editor outside the container to edit code; any changes will be mirrored inside the container. Due to a limitation of the CLI, only port 8080 (the Web UI) will be exposed for you to access if you run this way. 
Other services are usable from the shell inside. GitHub Codespaces . You can start editing as soon as VSCode is open, though you may want to wait for pre-build.sh to finish installing dependencies, building binaries, and setting up the cluster before running any commands in the terminal. Once you start running services (see next steps below), you can click on the \"PORTS\" tab in the VSCode terminal to see all forwarded ports. You can open the Web UI in a new tab from there. Once you have entered the container, continue to Developing Locally . Note: for Apple Silicon This platform can spend 3 times the indicated time Configure Docker Desktop to use BuildKit: \"features\" : { \"buildkit\" : true }, For Windows WSL2 Configure .wslconfig to limit memory usage by the WSL2 to prevent VSCode OOM. For Linux Use Docker Desktop instead of Docker Engine to prevent incorrect network configuration by k3d. Requirements \u00b6 Clone the Git repo into: $GOPATH/src/github.com/argoproj/argo-workflows . Any other path will break the code generation. Add the following to your /etc/hosts : 127.0.0.1 dex 127.0.0.1 minio 127.0.0.1 postgres 127.0.0.1 mysql 127.0.0.1 azurite To build on your own machine without using the Dev Container you will need: Go Yarn Docker protoc node for running the UI A local Kubernetes cluster ( k3d , kind , or minikube ) We recommend using K3D to set up the local Kubernetes cluster since this will allow you to test RBAC set-up and is fast. You can set-up K3D to be part of your default kube config as follows: k3d cluster start --wait Alternatively, you can use Minikube to set up the local Kubernetes cluster. Once a local Kubernetes cluster has started via minikube start , your kube config will use Minikube's context automatically. Warning Do not use Docker Desktop's embedded Kubernetes, it does not support Kubernetes RBAC (i.e. kubectl auth can-i always returns allowed ). Developing locally \u00b6 To start: The controller, so you can run workflows. MinIO ( http://localhost:9000 , use admin/password) so you can use artifacts. Run: make start Make sure you don't see any errors in your terminal. This runs the Workflow Controller locally on your machine (not in Docker/Kubernetes). You can submit a workflow for testing using kubectl : kubectl create -f examples/hello-world.yaml We recommend running make clean before make start to ensure recompilation. If you made changes to the executor, you need to build the image: make argoexec-image To also start the API on http://localhost:2746 : make start API = true This runs the Argo Server (in addition to the Workflow Controller) locally on your machine. To also start the UI on http://localhost:8080 ( UI=true implies API=true ): make start UI = true If you are making change to the CLI (i.e. Argo Server), you can build it separately if you want: make cli ./dist/argo submit examples/hello-world.yaml ; # new CLI is created as `./dist/argo` Although, note that this will be built automatically if you do: make start API=true . To test the workflow archive, use PROFILE=mysql or PROFILE=postgres : make start PROFILE = mysql You'll have, either: Postgres on http://localhost:5432 , run make postgres-cli to access. MySQL on http://localhost:3306 , run make mysql-cli to access. 
To test SSO integration, use PROFILE=sso : make start UI = true PROFILE = sso Running E2E tests locally \u00b6 Start up Argo Workflows using the following: make start PROFILE = mysql AUTH_MODE = client STATIC_FILES = false API = true If you want to run Azure tests against a local Azurite: kubectl -n $KUBE_NAMESPACE apply -f test/e2e/azure/deploy-azurite.yaml make start Running One Test \u00b6 In most cases, you want to run the test that relates to your changes locally. You should not run all the tests suites. Our CI will run those concurrently when you create a PR, which will give you feedback much faster. Find the test that you want to run in test/e2e make TestArtifactServer Running A Set Of Tests \u00b6 You can find the build tag at the top of the test file. //go:build api You need to run make test-{buildTag} , so for api that would be: make test-api Diagnosing Test Failure \u00b6 Tests often fail: that's good. To diagnose failure: Run kubectl get pods , are pods in the state you expect? Run kubectl get wf , is your workflow in the state you expect? What do the pod logs say? I.e. kubectl logs . Check the controller and argo-server logs. These are printed to the console you ran make start in. Is anything logged at level=error ? If tests run slowly or time out, factory reset your Kubernetes cluster. Committing \u00b6 Before you commit code and raise a PR, always run: make pre-commit -B Please do the following when creating your PR: Sign-off your commits. Use Conventional Commit messages . Suffix the issue number. Examples: git commit --signoff -m 'fix: Fixed broken thing. Fixes #1234' git commit --signoff -m 'feat: Added a new feature. Fixes #1234' Troubleshooting \u00b6 When running make pre-commit -B , if you encounter errors like make: *** [pkg/apiclient/clusterworkflowtemplate/cluster-workflow-template.swagger.json] Error 1 , ensure that you have checked out your code into $GOPATH/src/github.com/argoproj/argo-workflows . If you encounter \"out of heap\" issues when building UI through Docker, please validate resources allocated to Docker. Compilation may fail if allocated RAM is less than 4Gi. To start profiling with pprof , pass ARGO_PPROF=true when starting the controller locally. Then run the following: go tool pprof http://localhost:6060/debug/pprof/profile # 30-second CPU profile go tool pprof http://localhost:6060/debug/pprof/heap # heap profile go tool pprof http://localhost:6060/debug/pprof/block # goroutine blocking profile Using Multiple Terminals \u00b6 I run the controller in one terminal, and the UI in another. I like the UI: it is much faster to debug workflows than the terminal. This allows you to make changes to the controller and re-start it, without restarting the UI (which I think takes too long to start-up). As a convenience, CTRL=false implies UI=true , so just run: make start CTRL = false","title":"Running Locally"},{"location":"running-locally/#running-locally","text":"You have two options: Use the Dev Container . This takes about 7 minutes. This can be used with VSCode, the devcontainer CLI, or GitHub Codespaces. Install the requirements on your computer manually. This takes about 1 hour.","title":"Running Locally"},{"location":"running-locally/#development-container","text":"The development container should be able to do everything you need to do to develop Argo Workflows without installing tools on your local machine. It takes quite a long time to build the container. It runs k3d inside the container so you have a cluster to test against. 
To communicate with services running either in other development containers or directly on the local machine (e.g. a database), the following URL can be used in the workflow spec: host.docker.internal: . This facilitates the implementation of workflows which need to connect to a database or an API server. You can use the development container in a few different ways: Visual Studio Code with Dev Containers extension . Open your argo-workflows folder in VSCode and it should offer to use the development container automatically. VSCode will allow you to forward ports to allow your external browser to access the running components. devcontainer CLI . Once installed, go to your argo-workflows folder and run devcontainer up --workspace-folder . followed by devcontainer exec --workspace-folder . /bin/bash to get a shell where you can build the code. You can use any editor outside the container to edit code; any changes will be mirrored inside the container. Due to a limitation of the CLI, only port 8080 (the Web UI) will be exposed for you to access if you run this way. Other services are usable from the shell inside. GitHub Codespaces . You can start editing as soon as VSCode is open, though you may want to wait for pre-build.sh to finish installing dependencies, building binaries, and setting up the cluster before running any commands in the terminal. Once you start running services (see next steps below), you can click on the \"PORTS\" tab in the VSCode terminal to see all forwarded ports. You can open the Web UI in a new tab from there. Once you have entered the container, continue to Developing Locally . Note: for Apple Silicon This platform can spend 3 times the indicated time Configure Docker Desktop to use BuildKit: \"features\" : { \"buildkit\" : true }, For Windows WSL2 Configure .wslconfig to limit memory usage by the WSL2 to prevent VSCode OOM. For Linux Use Docker Desktop instead of Docker Engine to prevent incorrect network configuration by k3d.","title":"Development Container"},{"location":"running-locally/#requirements","text":"Clone the Git repo into: $GOPATH/src/github.com/argoproj/argo-workflows . Any other path will break the code generation. Add the following to your /etc/hosts : 127.0.0.1 dex 127.0.0.1 minio 127.0.0.1 postgres 127.0.0.1 mysql 127.0.0.1 azurite To build on your own machine without using the Dev Container you will need: Go Yarn Docker protoc node for running the UI A local Kubernetes cluster ( k3d , kind , or minikube ) We recommend using K3D to set up the local Kubernetes cluster since this will allow you to test RBAC set-up and is fast. You can set-up K3D to be part of your default kube config as follows: k3d cluster start --wait Alternatively, you can use Minikube to set up the local Kubernetes cluster. Once a local Kubernetes cluster has started via minikube start , your kube config will use Minikube's context automatically. Warning Do not use Docker Desktop's embedded Kubernetes, it does not support Kubernetes RBAC (i.e. kubectl auth can-i always returns allowed ).","title":"Requirements"},{"location":"running-locally/#developing-locally","text":"To start: The controller, so you can run workflows. MinIO ( http://localhost:9000 , use admin/password) so you can use artifacts. Run: make start Make sure you don't see any errors in your terminal. This runs the Workflow Controller locally on your machine (not in Docker/Kubernetes). 
You can submit a workflow for testing using kubectl : kubectl create -f examples/hello-world.yaml We recommend running make clean before make start to ensure recompilation. If you made changes to the executor, you need to build the image: make argoexec-image To also start the API on http://localhost:2746 : make start API = true This runs the Argo Server (in addition to the Workflow Controller) locally on your machine. To also start the UI on http://localhost:8080 ( UI=true implies API=true ): make start UI = true If you are making change to the CLI (i.e. Argo Server), you can build it separately if you want: make cli ./dist/argo submit examples/hello-world.yaml ; # new CLI is created as `./dist/argo` Although, note that this will be built automatically if you do: make start API=true . To test the workflow archive, use PROFILE=mysql or PROFILE=postgres : make start PROFILE = mysql You'll have, either: Postgres on http://localhost:5432 , run make postgres-cli to access. MySQL on http://localhost:3306 , run make mysql-cli to access. To test SSO integration, use PROFILE=sso : make start UI = true PROFILE = sso","title":"Developing locally"},{"location":"running-locally/#running-e2e-tests-locally","text":"Start up Argo Workflows using the following: make start PROFILE = mysql AUTH_MODE = client STATIC_FILES = false API = true If you want to run Azure tests against a local Azurite: kubectl -n $KUBE_NAMESPACE apply -f test/e2e/azure/deploy-azurite.yaml make start","title":"Running E2E tests locally"},{"location":"running-locally/#running-one-test","text":"In most cases, you want to run the test that relates to your changes locally. You should not run all the tests suites. Our CI will run those concurrently when you create a PR, which will give you feedback much faster. Find the test that you want to run in test/e2e make TestArtifactServer","title":"Running One Test"},{"location":"running-locally/#running-a-set-of-tests","text":"You can find the build tag at the top of the test file. //go:build api You need to run make test-{buildTag} , so for api that would be: make test-api","title":"Running A Set Of Tests"},{"location":"running-locally/#diagnosing-test-failure","text":"Tests often fail: that's good. To diagnose failure: Run kubectl get pods , are pods in the state you expect? Run kubectl get wf , is your workflow in the state you expect? What do the pod logs say? I.e. kubectl logs . Check the controller and argo-server logs. These are printed to the console you ran make start in. Is anything logged at level=error ? If tests run slowly or time out, factory reset your Kubernetes cluster.","title":"Diagnosing Test Failure"},{"location":"running-locally/#committing","text":"Before you commit code and raise a PR, always run: make pre-commit -B Please do the following when creating your PR: Sign-off your commits. Use Conventional Commit messages . Suffix the issue number. Examples: git commit --signoff -m 'fix: Fixed broken thing. Fixes #1234' git commit --signoff -m 'feat: Added a new feature. Fixes #1234'","title":"Committing"},{"location":"running-locally/#troubleshooting","text":"When running make pre-commit -B , if you encounter errors like make: *** [pkg/apiclient/clusterworkflowtemplate/cluster-workflow-template.swagger.json] Error 1 , ensure that you have checked out your code into $GOPATH/src/github.com/argoproj/argo-workflows . If you encounter \"out of heap\" issues when building UI through Docker, please validate resources allocated to Docker. 
Compilation may fail if allocated RAM is less than 4Gi. To start profiling with pprof , pass ARGO_PPROF=true when starting the controller locally. Then run the following: go tool pprof http://localhost:6060/debug/pprof/profile # 30-second CPU profile go tool pprof http://localhost:6060/debug/pprof/heap # heap profile go tool pprof http://localhost:6060/debug/pprof/block # goroutine blocking profile","title":"Troubleshooting"},{"location":"running-locally/#using-multiple-terminals","text":"I run the controller in one terminal, and the UI in another. I like the UI: it is much faster to debug workflows than the terminal. This allows you to make changes to the controller and re-start it, without restarting the UI (which I think takes too long to start-up). As a convenience, CTRL=false implies UI=true , so just run: make start CTRL = false","title":"Using Multiple Terminals"},{"location":"running-nix/","text":"Try Argo using Nix \u00b6 Nix is a package manager / build tool which focuses on reproducible build environments. Argo Workflows has some basic support for Nix which is enough to get Argo Workflows up and running with minimal effort. Here are the steps to follow: Modify your hosts file and set up a Kubernetes cluster according to Running Locally . Don't worry about the other instructions. Install Nix . Run nix develop --extra-experimental-features nix-command --extra-experimental-features flakes ./dev/nix/ --impure (you can add the extra features as a default in your nix.conf file). Run devenv up . Warning \u00b6 This is still bare-bones at the moment, any feature in the Makefile not mentioned here is excluded for now. In practice, this means that only a make start UI=true equivalent is supported at the moment. As an additional caveat, there are no LDFlags set in the build; as a result the UI will show 0.0.0-unknown for the version. How do I upgrade a dependency? \u00b6 Most dependencies are in the Nix packages repository but if you want a specific version, you might have to build it yourself. This is fairly trivial in Nix, the idea is to just change the version string to whatever package you are concerned about. Changing a python dependency version \u00b6 If we look at the mkdocs dependency, we see a call to buildPythonPackage , to change the version we need to just modify the version string. Doing this will display a failure because the hash from the fetchPypi command will now differ, it will also display the correct hash, copy this hash and replace the existing hash value. Changing a go dependency version \u00b6 The almost exact same principles apply here, the only difference being you must change the vendorHash and the sha256 fields. The vendorHash is a hash of the vendored dependencies while the sha256 is for the sources fetched from the fetchFromGithub call. Why am I getting a vendorSha256 mismatch ? \u00b6 Unfortunately, dependabot is not capable of upgrading flakes automatically, when the go modules are automatically upgraded the hash of the vendor dependencies changes but this change isn't automatically reflected in the nix file. The vendorSha256 field that needs to be upgraded can be found by searching for ${package.name} = pkgs.buildGoModule in the nix file.","title":"Try Argo using Nix"},{"location":"running-nix/#try-argo-using-nix","text":"Nix is a package manager / build tool which focuses on reproducible build environments. Argo Workflows has some basic support for Nix which is enough to get Argo Workflows up and running with minimal effort. 
Here are the steps to follow: Modify your hosts file and set up a Kubernetes cluster according to Running Locally . Don't worry about the other instructions. Install Nix . Run nix develop --extra-experimental-features nix-command --extra-experimental-features flakes ./dev/nix/ --impure (you can add the extra features as a default in your nix.conf file). Run devenv up .","title":"Try Argo using Nix"},{"location":"running-nix/#warning","text":"This is still bare-bones at the moment, any feature in the Makefile not mentioned here is excluded for now. In practice, this means that only a make start UI=true equivalent is supported at the moment. As an additional caveat, there are no LDFlags set in the build; as a result the UI will show 0.0.0-unknown for the version.","title":"Warning"},{"location":"running-nix/#how-do-i-upgrade-a-dependency","text":"Most dependencies are in the Nix packages repository but if you want a specific version, you might have to build it yourself. This is fairly trivial in Nix, the idea is to just change the version string to whatever package you are concerned about.","title":"How do I upgrade a dependency?"},{"location":"running-nix/#changing-a-python-dependency-version","text":"If we look at the mkdocs dependency, we see a call to buildPythonPackage , to change the version we need to just modify the version string. Doing this will display a failure because the hash from the fetchPypi command will now differ, it will also display the correct hash, copy this hash and replace the existing hash value.","title":"Changing a python dependency version"},{"location":"running-nix/#changing-a-go-dependency-version","text":"The almost exact same principles apply here, the only difference being you must change the vendorHash and the sha256 fields. The vendorHash is a hash of the vendored dependencies while the sha256 is for the sources fetched from the fetchFromGithub call.","title":"Changing a go dependency version"},{"location":"running-nix/#why-am-i-getting-a-vendorsha256-mismatch","text":"Unfortunately, dependabot is not capable of upgrading flakes automatically, when the go modules are automatically upgraded the hash of the vendor dependencies changes but this change isn't automatically reflected in the nix file. The vendorSha256 field that needs to be upgraded can be found by searching for ${package.name} = pkgs.buildGoModule in the nix file.","title":"Why am I getting a vendorSha256 mismatch ?"},{"location":"scaling/","text":"Scaling \u00b6 For running large workflows, you'll typically need to scale the controller to match. Horizontally Scaling \u00b6 You cannot horizontally scale the controller. v3.0 As of v3.0, the controller supports having a hot-standby for High Availability . Vertically Scaling \u00b6 You can scale the controller vertically in these ways: Container Resource Requests \u00b6 If you observe the Controller using its total CPU or memory requests, you should increase those. Adding Goroutines to Increase Concurrency \u00b6 If you have sufficient CPU cores, you can take advantage of them with more goroutines: If you have many Workflows and you notice they're not being reconciled fast enough, increase --workflow-workers . If you're using TTLStrategy in your Workflows and you notice they're not being deleted fast enough, increase --workflow-ttl-workers . If you're using PodGC in your Workflows and you notice the Pods aren't being deleted fast enough, increase --pod-cleanup-workers . 
v3.5 and after If you're using a lot of CronWorkflows and they don't seem to be firing on time, increase --cron-workflow-workers . K8S API Client Side Rate Limiting \u00b6 The K8S client library rate limits the messages that can go out. If you frequently see messages similar to this in the Controller log (issued by the library): Waited for 7.090296384s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t Or, in >= v3.5, if you see warnings similar to this (could be any CR, not just WorkflowTemplate ): Waited for 7.090296384s, request:GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t Then, if your K8S API Server can handle more requests: Increase both --qps and --burst arguments for the Controller. The qps value indicates the average number of queries per second allowed by the K8S Client. The burst value is the number of queries/sec the Client receives before it starts enforcing qps , so typically burst > qps . If not set, the default values are qps=20 and burst=30 (as of v3.5 (refer to cmd/workflow-controller/main.go in case the values change)). Sharding \u00b6 One Install Per Namespace \u00b6 Rather than running a single installation in your cluster, run one per namespace using the --namespaced flag. Instance ID \u00b6 Within a cluster can use instance ID to run N Argo instances within a cluster. Create one namespace for each Argo, e.g. argo-i1 , argo-i2 :. Edit workflow-controller-configmap.yaml for each namespace to set an instance ID. apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : instanceID : i1 v2.9 and after You may need to pass the instance ID to the CLI: argo --instanceid i1 submit my-wf.yaml You do not need to have one instance ID per namespace, you could have many or few. Maximum Recursion Depth \u00b6 In order to protect users against infinite recursion, the controller has a default maximum recursion depth of 100 calls to templates. This protection can be disabled with the environment variable DISABLE_MAX_RECURSION=true Miscellaneous \u00b6 See also Running At Massive Scale .","title":"Scaling"},{"location":"scaling/#scaling","text":"For running large workflows, you'll typically need to scale the controller to match.","title":"Scaling"},{"location":"scaling/#horizontally-scaling","text":"You cannot horizontally scale the controller. v3.0 As of v3.0, the controller supports having a hot-standby for High Availability .","title":"Horizontally Scaling"},{"location":"scaling/#vertically-scaling","text":"You can scale the controller vertically in these ways:","title":"Vertically Scaling"},{"location":"scaling/#container-resource-requests","text":"If you observe the Controller using its total CPU or memory requests, you should increase those.","title":"Container Resource Requests"},{"location":"scaling/#adding-goroutines-to-increase-concurrency","text":"If you have sufficient CPU cores, you can take advantage of them with more goroutines: If you have many Workflows and you notice they're not being reconciled fast enough, increase --workflow-workers . If you're using TTLStrategy in your Workflows and you notice they're not being deleted fast enough, increase --workflow-ttl-workers . If you're using PodGC in your Workflows and you notice the Pods aren't being deleted fast enough, increase --pod-cleanup-workers . 
v3.5 and after If you're using a lot of CronWorkflows and they don't seem to be firing on time, increase --cron-workflow-workers .","title":"Adding Goroutines to Increase Concurrency"},{"location":"scaling/#k8s-api-client-side-rate-limiting","text":"The K8S client library rate limits the messages that can go out. If you frequently see messages similar to this in the Controller log (issued by the library): Waited for 7.090296384s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t Or, in >= v3.5, if you see warnings similar to this (could be any CR, not just WorkflowTemplate ): Waited for 7.090296384s, request:GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t Then, if your K8S API Server can handle more requests: Increase both --qps and --burst arguments for the Controller. The qps value indicates the average number of queries per second allowed by the K8S Client. The burst value is the number of queries/sec the Client receives before it starts enforcing qps , so typically burst > qps . If not set, the default values are qps=20 and burst=30 (as of v3.5 (refer to cmd/workflow-controller/main.go in case the values change)).","title":"K8S API Client Side Rate Limiting"},{"location":"scaling/#sharding","text":"","title":"Sharding"},{"location":"scaling/#one-install-per-namespace","text":"Rather than running a single installation in your cluster, run one per namespace using the --namespaced flag.","title":"One Install Per Namespace"},{"location":"scaling/#instance-id","text":"Within a cluster can use instance ID to run N Argo instances within a cluster. Create one namespace for each Argo, e.g. argo-i1 , argo-i2 :. Edit workflow-controller-configmap.yaml for each namespace to set an instance ID. apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : instanceID : i1 v2.9 and after You may need to pass the instance ID to the CLI: argo --instanceid i1 submit my-wf.yaml You do not need to have one instance ID per namespace, you could have many or few.","title":"Instance ID"},{"location":"scaling/#maximum-recursion-depth","text":"In order to protect users against infinite recursion, the controller has a default maximum recursion depth of 100 calls to templates. This protection can be disabled with the environment variable DISABLE_MAX_RECURSION=true","title":"Maximum Recursion Depth"},{"location":"scaling/#miscellaneous","text":"See also Running At Massive Scale .","title":"Miscellaneous"},{"location":"security/","text":"Security \u00b6 To report security issues . \ud83d\udca1 Read Practical Argo Workflows Hardening . Workflow Controller Security \u00b6 This has three parts. Controller Permissions \u00b6 The controller has permission (via Kubernetes RBAC + its config map) with either all namespaces (cluster-scope install) or a single managed namespace (namespace-install), notably: List/get/update workflows, and cron-workflows. Create/get/delete pods, PVCs, and PDBs. List/get template, config maps, service accounts, and secrets. See workflow-controller-cluster-role.yaml or workflow-controller-role.yaml User Permissions \u00b6 Users minimally need permission to create/read workflows. The controller will then create workflow pods (config maps etc) on behalf of the users, even if the user does not have permission to do this themselves. The controller will only create workflow pods in the workflow's namespace. 
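As an illustration, a minimal namespaced Role granting that create/read access might look like the sketch below. The role name is made up, and some setups will want additional verbs or resources, for example to read pod logs through the UI.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-submitter   # illustrative name
rules:
  - apiGroups:
      - argoproj.io
    resources:
      - workflows
    verbs:
      - create
      - get
      - list
      - watch
```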
A way to think of this is that, if the user has permission to create a workflow in a namespace, then it is OK to create pods or anything else for them in that namespace. If the user only has permission to create workflows, then they will be typically unable to configure other necessary resources such as config maps, or view the outcome of their workflow. This is useful when the user is a service. Warning If you allow users to create workflows in the controller's namespace (typically argo ), it may be possible for users to modify the controller itself. In a namespace-install the managed namespace should therefore not be the controller's namespace. You can typically further restrict what a user can do to just being able to submit workflows from templates using the workflow restrictions feature . UI Access \u00b6 If you want a user to have read-only access to the entirety of the Argo UI for their namespace, a sample role for them may look like: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : ui-user-read-only rules : # k8s standard APIs - apiGroups : - \"\" resources : - events - pods - pods/log verbs : - get - list - watch # Argo APIs. See also https://github.com/argoproj/argo-workflows/blob/main/manifests/cluster-install/workflow-controller-rbac/workflow-aggregate-roles.yaml#L4 - apiGroups : - argoproj.io resources : - eventsources - sensors - workflows - workfloweventbindings - workflowtemplates - clusterworkflowtemplates - cronworkflows - cronworkflows - workflowtaskresults verbs : - get - list - watch Workflow Pod Permissions \u00b6 Workflow pods run using either: The default service account. The service account declared in the workflow spec. There is no restriction on which service account in a namespace may be used. This service account typically needs permissions . Different service accounts should be used if a workflow pod needs to have elevated permissions, e.g. to create other resources. The main container will have the service account token mounted, allowing the main container to patch pods (among other permissions). Set automountServiceAccountToken to false to prevent this. See fields . By default, workflows pods run as root . To further secure workflow pods, set the workflow pod security context . You should configure the controller with the correct workflow executor for your trade off between security and scalability. These settings can be set by default using workflow defaults . Argo Server Security \u00b6 Argo Server implements security in three layers. Firstly, you should enable transport layer security to ensure your data cannot be read in transit. Secondly, you should enable an authentication mode to ensure that you do not run workflows from unknown users. Finally, you should configure the argo-server role and role binding with the correct permissions. Read-Only \u00b6 You can achieve this by configuring the argo-server role ( example with only read access (i.e. only get / list / watch verbs)). Network Security \u00b6 Argo Workflows requires various levels of network access depending on configuration and the features enabled. The following describes the different workflow components and their network access needs, to help provide guidance on how to configure the argo namespace in a secure manner (e.g. NetworkPolicy ). Argo Server \u00b6 The Argo Server is commonly exposed to end-users to provide users with a UI for visualizing and managing their workflows. It must also be exposed if leveraging webhooks to trigger workflows. 
Both of these use cases require that the argo-server Service to be exposed for ingress traffic (e.g. with an Ingress object or load balancer). Note that the Argo UI is also available to be accessed by running the server locally (i.e. argo server ) using local KUBECONFIG credentials, and visiting the UI over https://localhost:2746 . The Argo Server additionally has a feature to allow downloading of artifacts through the UI. This feature requires that the argo-server be given egress access to the underlying artifact provider (e.g. S3, GCS, MinIO, Artifactory, Azure Blob Storage) in order to download and stream the artifact. Workflow Controller \u00b6 The workflow-controller Deployment exposes a Prometheus metrics endpoint (workflow-controller-metrics:9090) so that a Prometheus server can periodically scrape for controller level metrics. Since Prometheus is typically running in a separate namespace, the argo namespace should be configured to allow cross-namespace ingress access to the workflow-controller-metrics Service. Database access \u00b6 A persistent store can be configured for either archiving or offloading workflows. If either of these features are enabled, both the workflow-controller and argo-server Deployments will need egress network access to the external database used for archiving/offloading.","title":"Security"},{"location":"security/#security","text":"To report security issues . \ud83d\udca1 Read Practical Argo Workflows Hardening .","title":"Security"},{"location":"security/#workflow-controller-security","text":"This has three parts.","title":"Workflow Controller Security"},{"location":"security/#controller-permissions","text":"The controller has permission (via Kubernetes RBAC + its config map) with either all namespaces (cluster-scope install) or a single managed namespace (namespace-install), notably: List/get/update workflows, and cron-workflows. Create/get/delete pods, PVCs, and PDBs. List/get template, config maps, service accounts, and secrets. See workflow-controller-cluster-role.yaml or workflow-controller-role.yaml","title":"Controller Permissions"},{"location":"security/#user-permissions","text":"Users minimally need permission to create/read workflows. The controller will then create workflow pods (config maps etc) on behalf of the users, even if the user does not have permission to do this themselves. The controller will only create workflow pods in the workflow's namespace. A way to think of this is that, if the user has permission to create a workflow in a namespace, then it is OK to create pods or anything else for them in that namespace. If the user only has permission to create workflows, then they will be typically unable to configure other necessary resources such as config maps, or view the outcome of their workflow. This is useful when the user is a service. Warning If you allow users to create workflows in the controller's namespace (typically argo ), it may be possible for users to modify the controller itself. In a namespace-install the managed namespace should therefore not be the controller's namespace. 
You can typically further restrict what a user can do to just being able to submit workflows from templates using the workflow restrictions feature .","title":"User Permissions"},{"location":"security/#ui-access","text":"If you want a user to have read-only access to the entirety of the Argo UI for their namespace, a sample role for them may look like: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : ui-user-read-only rules : # k8s standard APIs - apiGroups : - \"\" resources : - events - pods - pods/log verbs : - get - list - watch # Argo APIs. See also https://github.com/argoproj/argo-workflows/blob/main/manifests/cluster-install/workflow-controller-rbac/workflow-aggregate-roles.yaml#L4 - apiGroups : - argoproj.io resources : - eventsources - sensors - workflows - workfloweventbindings - workflowtemplates - clusterworkflowtemplates - cronworkflows - cronworkflows - workflowtaskresults verbs : - get - list - watch","title":"UI Access"},{"location":"security/#workflow-pod-permissions","text":"Workflow pods run using either: The default service account. The service account declared in the workflow spec. There is no restriction on which service account in a namespace may be used. This service account typically needs permissions . Different service accounts should be used if a workflow pod needs to have elevated permissions, e.g. to create other resources. The main container will have the service account token mounted, allowing the main container to patch pods (among other permissions). Set automountServiceAccountToken to false to prevent this. See fields . By default, workflows pods run as root . To further secure workflow pods, set the workflow pod security context . You should configure the controller with the correct workflow executor for your trade off between security and scalability. These settings can be set by default using workflow defaults .","title":"Workflow Pod Permissions"},{"location":"security/#argo-server-security","text":"Argo Server implements security in three layers. Firstly, you should enable transport layer security to ensure your data cannot be read in transit. Secondly, you should enable an authentication mode to ensure that you do not run workflows from unknown users. Finally, you should configure the argo-server role and role binding with the correct permissions.","title":"Argo Server Security"},{"location":"security/#read-only","text":"You can achieve this by configuring the argo-server role ( example with only read access (i.e. only get / list / watch verbs)).","title":"Read-Only"},{"location":"security/#network-security","text":"Argo Workflows requires various levels of network access depending on configuration and the features enabled. The following describes the different workflow components and their network access needs, to help provide guidance on how to configure the argo namespace in a secure manner (e.g. NetworkPolicy ).","title":"Network Security"},{"location":"security/#argo-server","text":"The Argo Server is commonly exposed to end-users to provide users with a UI for visualizing and managing their workflows. It must also be exposed if leveraging webhooks to trigger workflows. Both of these use cases require that the argo-server Service to be exposed for ingress traffic (e.g. with an Ingress object or load balancer). Note that the Argo UI is also available to be accessed by running the server locally (i.e. argo server ) using local KUBECONFIG credentials, and visiting the UI over https://localhost:2746 . 
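As a sketch of the ingress requirement described above (the hostname and ingress controller are assumptions, not part of the standard manifests), an Ingress pointing at the argo-server Service on port 2746 might look like this:

```yaml
# A sketch only: host, TLS set-up and ingress class are assumptions for your environment.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argo-server
  annotations:
    # needed when argo-server itself serves HTTPS (the default in v3.0+ when a certificate is available)
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  rules:
    - host: argo.example.com   # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argo-server
                port:
                  number: 2746
```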
The Argo Server additionally has a feature to allow downloading of artifacts through the UI. This feature requires that the argo-server be given egress access to the underlying artifact provider (e.g. S3, GCS, MinIO, Artifactory, Azure Blob Storage) in order to download and stream the artifact.","title":"Argo Server"},{"location":"security/#workflow-controller","text":"The workflow-controller Deployment exposes a Prometheus metrics endpoint (workflow-controller-metrics:9090) so that a Prometheus server can periodically scrape for controller level metrics. Since Prometheus is typically running in a separate namespace, the argo namespace should be configured to allow cross-namespace ingress access to the workflow-controller-metrics Service.","title":"Workflow Controller"},{"location":"security/#database-access","text":"A persistent store can be configured for either archiving or offloading workflows. If either of these features are enabled, both the workflow-controller and argo-server Deployments will need egress network access to the external database used for archiving/offloading.","title":"Database access"},{"location":"service-accounts/","text":"Service Accounts \u00b6 Configure the service account to run Workflows \u00b6 Roles, Role-Bindings, and Service Accounts \u00b6 In order for Argo to support features such as artifacts, outputs, access to secrets, etc. it needs to communicate with Kubernetes resources using the Kubernetes API. To communicate with the Kubernetes API, Argo uses a ServiceAccount to authenticate itself to the Kubernetes API. You can specify which Role (i.e. which permissions) the ServiceAccount that Argo uses by binding a Role to a ServiceAccount using a RoleBinding Then, when submitting Workflows you can specify which ServiceAccount Argo uses using: argo submit --serviceaccount When no ServiceAccount is provided, Argo will use the default ServiceAccount from the namespace from which it is run, which will almost always have insufficient privileges by default. For more information about granting Argo the necessary permissions for your use case see Workflow RBAC . Granting admin privileges \u00b6 For the purposes of this demo, we will grant the default ServiceAccount admin privileges (i.e., we will bind the admin Role to the default ServiceAccount of the current namespace): kubectl create rolebinding default-admin --clusterrole = admin --serviceaccount = argo:default -n argo Note that this will grant admin privileges to the default ServiceAccount in the namespace that the command is run from, so you will only be able to run Workflows in the namespace where the RoleBinding was made.","title":"Service Accounts"},{"location":"service-accounts/#service-accounts","text":"","title":"Service Accounts"},{"location":"service-accounts/#configure-the-service-account-to-run-workflows","text":"","title":"Configure the service account to run Workflows"},{"location":"service-accounts/#roles-role-bindings-and-service-accounts","text":"In order for Argo to support features such as artifacts, outputs, access to secrets, etc. it needs to communicate with Kubernetes resources using the Kubernetes API. To communicate with the Kubernetes API, Argo uses a ServiceAccount to authenticate itself to the Kubernetes API. You can specify which Role (i.e. 
which permissions) the ServiceAccount that Argo uses by binding a Role to a ServiceAccount using a RoleBinding Then, when submitting Workflows you can specify which ServiceAccount Argo uses using: argo submit --serviceaccount When no ServiceAccount is provided, Argo will use the default ServiceAccount from the namespace from which it is run, which will almost always have insufficient privileges by default. For more information about granting Argo the necessary permissions for your use case see Workflow RBAC .","title":"Roles, Role-Bindings, and Service Accounts"},{"location":"service-accounts/#granting-admin-privileges","text":"For the purposes of this demo, we will grant the default ServiceAccount admin privileges (i.e., we will bind the admin Role to the default ServiceAccount of the current namespace): kubectl create rolebinding default-admin --clusterrole = admin --serviceaccount = argo:default -n argo Note that this will grant admin privileges to the default ServiceAccount in the namespace that the command is run from, so you will only be able to run Workflows in the namespace where the RoleBinding was made.","title":"Granting admin privileges"},{"location":"sidecar-injection/","text":"Sidecar Injection \u00b6 Automatic (i.e. mutating webhook based) sidecar injection systems, including service meshes such as Anthos and Istio Proxy, create a unique problem for Kubernetes workloads that run to completion. Because sidecars are injected outside of the view of the workflow controller, the controller has no awareness of them. It has no opportunity to rewrite the containers command (when using the Emissary Executor) and as the sidecar's process will run as PID 1, which is protected. It can be impossible for the wait container to terminate the sidecar. You will minimize problems by not using Istio with Argo Workflows. See #1282 . Support Matrix \u00b6 Key: Unsupported - this executor is no longer supported Any - we can kill any image KubectlExec - we kill images by running kubectl exec Executor Sidecar Injected Sidecar docker Any Unsupported emissary Any KubectlExec k8sapi Shell KubectlExec kubelet Shell KubectlExec pns Any Any How We Kill Sidecars Using kubectl exec \u00b6 v3.1 and after Kubernetes does not provide a way to kill a single container. You can delete a pod, but this kills all containers, and loses all information and logs of that pod. Instead, try to mimic the Kubernetes termination behavior, which is: SIGTERM PID 1 Wait for the pod's terminateGracePeriodSeconds (30s by default). SIGKILL PID 1 The following are not supported: preStop STOPSIGNAL To do this, it must be possible to run a kubectl exec command that kills the injected sidecar. By default it runs /bin/sh -c 'kill 1' . This can fail: No /bin/sh . Process is not running as PID 1 (which is becoming the default these days due to runAsNonRoot ). Process does not correctly respond to kill 1 (e.g. some shell script weirdness). You can override the kill command by using a pod annotation (where %d is the signal number), for example: spec : podMetadata : annotations : workflows.argoproj.io/kill-cmd-istio-proxy : '[\"pilot-agent\", \"request\", \"POST\", \"quitquitquit\"]' workflows.argoproj.io/kill-cmd-vault-agent : '[\"sh\", \"-c\", \"kill -%d 1\"]' workflows.argoproj.io/kill-cmd-sidecar : '[\"sh\", \"-c\", \"kill -%d $(pidof entrypoint.sh)\"]'","title":"Sidecar Injection"},{"location":"sidecar-injection/#sidecar-injection","text":"Automatic (i.e. 
mutating webhook based) sidecar injection systems, including service meshes such as Anthos and Istio Proxy, create a unique problem for Kubernetes workloads that run to completion. Because sidecars are injected outside of the view of the workflow controller, the controller has no awareness of them. It has no opportunity to rewrite the containers command (when using the Emissary Executor) and as the sidecar's process will run as PID 1, which is protected. It can be impossible for the wait container to terminate the sidecar. You will minimize problems by not using Istio with Argo Workflows. See #1282 .","title":"Sidecar Injection"},{"location":"sidecar-injection/#support-matrix","text":"Key: Unsupported - this executor is no longer supported Any - we can kill any image KubectlExec - we kill images by running kubectl exec Executor Sidecar Injected Sidecar docker Any Unsupported emissary Any KubectlExec k8sapi Shell KubectlExec kubelet Shell KubectlExec pns Any Any","title":"Support Matrix"},{"location":"sidecar-injection/#how-we-kill-sidecars-using-kubectl-exec","text":"v3.1 and after Kubernetes does not provide a way to kill a single container. You can delete a pod, but this kills all containers, and loses all information and logs of that pod. Instead, try to mimic the Kubernetes termination behavior, which is: SIGTERM PID 1 Wait for the pod's terminateGracePeriodSeconds (30s by default). SIGKILL PID 1 The following are not supported: preStop STOPSIGNAL To do this, it must be possible to run a kubectl exec command that kills the injected sidecar. By default it runs /bin/sh -c 'kill 1' . This can fail: No /bin/sh . Process is not running as PID 1 (which is becoming the default these days due to runAsNonRoot ). Process does not correctly respond to kill 1 (e.g. some shell script weirdness). You can override the kill command by using a pod annotation (where %d is the signal number), for example: spec : podMetadata : annotations : workflows.argoproj.io/kill-cmd-istio-proxy : '[\"pilot-agent\", \"request\", \"POST\", \"quitquitquit\"]' workflows.argoproj.io/kill-cmd-vault-agent : '[\"sh\", \"-c\", \"kill -%d 1\"]' workflows.argoproj.io/kill-cmd-sidecar : '[\"sh\", \"-c\", \"kill -%d $(pidof entrypoint.sh)\"]'","title":"How We Kill Sidecars Using kubectl exec"},{"location":"static-code-analysis/","text":"Static Code Analysis \u00b6 We use the following static code analysis tools: golangci-lint and eslint for compile time linting. Snyk for dependency and image scanning (SCA). These are at least run daily or on each pull request.","title":"Static Code Analysis"},{"location":"static-code-analysis/#static-code-analysis","text":"We use the following static code analysis tools: golangci-lint and eslint for compile time linting. Snyk for dependency and image scanning (SCA). These are at least run daily or on each pull request.","title":"Static Code Analysis"},{"location":"stress-testing/","text":"Stress Testing \u00b6 Install gcloud binary. 
# Login to GCP: gcloud auth login # Set-up your config (if needed): gcloud config set project alex-sb # Create a cluster (default region is us-west-2, if you're not in the western USA, you might want a different region): gcloud container clusters create-auto argo-workflows-stress-1 # Get credentials: gcloud container clusters get-credentials argo-workflows-stress-1 # Install workflows (If this fails, try running it again): make start PROFILE = stress # Make sure pods are running: kubectl get deployments # Run a test workflow: argo submit examples/hello-world.yaml --watch Checks Open http://localhost:2746/workflows and check it loads and that you can run a workflow. Open http://localhost:9090/metrics and check you can see the Prometheus metrics. Open http://localhost:9091/graph and check you can see a Prometheus graph. You can use this Tab Auto Refresh Chrome extension to auto-refresh the page. Open http://localhost:6060/debug/pprof and check you can access pprof . Run go run ./test/stress/tool -n 10000 to run a large number of workflows. Check Prometheus: See how many Kubernetes API requests are being made. You will see about one Update workflows per reconciliation, multiple Create pods . You should expect to see one Get workflowtemplates per workflow (done on first reconciliation). Otherwise, if you see anything else, that might be a problem. How many errors were logged? log_messages{level=\"error\"} What was the cause? Check PProf to see if there are any hot spots: go tool pprof -png http://localhost:6060/debug/pprof/allocs go tool pprof -png http://localhost:6060/debug/pprof/heap go tool pprof -png http://localhost:6060/debug/pprof/profile Clean-up \u00b6 gcloud container clusters delete argo-workflows-stress-1","title":"Stress Testing"},{"location":"stress-testing/#stress-testing","text":"Install gcloud binary. # Login to GCP: gcloud auth login # Set-up your config (if needed): gcloud config set project alex-sb # Create a cluster (default region is us-west-2, if you're not in the western USA, you might want a different region): gcloud container clusters create-auto argo-workflows-stress-1 # Get credentials: gcloud container clusters get-credentials argo-workflows-stress-1 # Install workflows (If this fails, try running it again): make start PROFILE = stress # Make sure pods are running: kubectl get deployments # Run a test workflow: argo submit examples/hello-world.yaml --watch Checks Open http://localhost:2746/workflows and check it loads and that you can run a workflow. Open http://localhost:9090/metrics and check you can see the Prometheus metrics. Open http://localhost:9091/graph and check you can see a Prometheus graph. You can use this Tab Auto Refresh Chrome extension to auto-refresh the page. Open http://localhost:6060/debug/pprof and check you can access pprof . Run go run ./test/stress/tool -n 10000 to run a large number of workflows. Check Prometheus: See how many Kubernetes API requests are being made. You will see about one Update workflows per reconciliation, multiple Create pods . You should expect to see one Get workflowtemplates per workflow (done on first reconciliation). Otherwise, if you see anything else, that might be a problem. How many errors were logged? log_messages{level=\"error\"} What was the cause? 
Check PProf to see if there any any hot spots: go tool pprof -png http://localhost:6060/debug/pprof/allocs go tool pprof -png http://localhost:6060/debug/pprof/heap go tool pprof -png http://localhost:6060/debug/pprof/profile","title":"Stress Testing"},{"location":"stress-testing/#clean-up","text":"gcloud container clusters delete argo-workflows-stress-1","title":"Clean-up"},{"location":"survey-data-privacy/","text":"Survey Data Privacy \u00b6 Privacy policy","title":"Survey Data Privacy"},{"location":"survey-data-privacy/#survey-data-privacy","text":"Privacy policy","title":"Survey Data Privacy"},{"location":"suspend-template/","text":"Suspend Template \u00b6 v2.1 See Suspending .","title":"Suspend Template"},{"location":"suspend-template/#suspend-template","text":"v2.1 See Suspending .","title":"Suspend Template"},{"location":"swagger/","text":"API Reference \u00b6 SwaggerUI window.onload = () => { window.ui = SwaggerUIBundle({ url: \"https://raw.githubusercontent.com/argoproj/argo-workflows/main/api/openapi-spec/swagger.json\", dom_id: \"#swagger-ui\", }); };","title":"API Reference"},{"location":"swagger/#api-reference","text":"SwaggerUI window.onload = () => { window.ui = SwaggerUIBundle({ url: \"https://raw.githubusercontent.com/argoproj/argo-workflows/main/api/openapi-spec/swagger.json\", dom_id: \"#swagger-ui\", }); };","title":"API Reference"},{"location":"synchronization/","text":"Synchronization \u00b6 v2.10 and after Introduction \u00b6 Synchronization enables users to limit the parallel execution of certain workflows or templates within a workflow without having to restrict others. Users can create multiple synchronization configurations in the ConfigMap that can be referred to from a workflow or template within a workflow. Alternatively, users can configure a mutex to prevent concurrent execution of templates or workflows using the same mutex. For example: apiVersion : v1 kind : ConfigMap metadata : name : my-config data : workflow : \"1\" # Only one workflow can run at given time in particular namespace template : \"2\" # Two instances of template can run at a given time in particular namespace Workflow-level Synchronization \u00b6 Workflow-level synchronization limits parallel execution of the workflow if workflows have the same synchronization reference. In this example, Workflow refers to workflow synchronization key which is configured as limit 1, so only one workflow instance will be executed at given time even multiple workflows created. Using a semaphore configured by a ConfigMap : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-wf-level- spec : entrypoint : whalesay synchronization : semaphore : configMapKeyRef : name : my-config key : workflow templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ] Using a mutex: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-wf-level- spec : entrypoint : whalesay synchronization : mutex : name : workflow templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ] Template-level Synchronization \u00b6 Template-level synchronization limits parallel execution of the template across workflows, if templates have the same synchronization reference. 
In this example, acquire-lock template has synchronization reference of template key which is configured as limit 2, so two instances of templates will be executed at a given time: even multiple steps/tasks within workflow or different workflows referring to the same template. Using a semaphore configured by a ConfigMap : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-tmpl-level- spec : entrypoint : synchronization-tmpl-level-example templates : - name : synchronization-tmpl-level-example steps : - - name : synchronization-acquire-lock template : acquire-lock arguments : parameters : - name : seconds value : \"{{item}}\" withParam : '[\"1\",\"2\",\"3\",\"4\",\"5\"]' - name : acquire-lock synchronization : semaphore : configMapKeyRef : name : my-config key : template container : image : alpine:latest command : [ sh , -c ] args : [ \"sleep 10; echo acquired lock\" ] Using a mutex: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-tmpl-level- spec : entrypoint : synchronization-tmpl-level-example templates : - name : synchronization-tmpl-level-example steps : - - name : synchronization-acquire-lock template : acquire-lock arguments : parameters : - name : seconds value : \"{{item}}\" withParam : '[\"1\",\"2\",\"3\",\"4\",\"5\"]' - name : acquire-lock synchronization : mutex : name : template container : image : alpine:latest command : [ sh , -c ] args : [ \"sleep 10; echo acquired lock\" ] Examples: Workflow level semaphore Workflow level mutex Step level semaphore Step level mutex Other Parallelism support \u00b6 In addition to this synchronization, the workflow controller supports a parallelism setting that applies to all workflows in the system (it is not granular to a class of workflows, or tasks withing them). Furthermore, there is a parallelism setting at the workflow and template level, but this only restricts total concurrent executions of tasks within the same workflow.","title":"Synchronization"},{"location":"synchronization/#synchronization","text":"v2.10 and after","title":"Synchronization"},{"location":"synchronization/#introduction","text":"Synchronization enables users to limit the parallel execution of certain workflows or templates within a workflow without having to restrict others. Users can create multiple synchronization configurations in the ConfigMap that can be referred to from a workflow or template within a workflow. Alternatively, users can configure a mutex to prevent concurrent execution of templates or workflows using the same mutex. For example: apiVersion : v1 kind : ConfigMap metadata : name : my-config data : workflow : \"1\" # Only one workflow can run at given time in particular namespace template : \"2\" # Two instances of template can run at a given time in particular namespace","title":"Introduction"},{"location":"synchronization/#workflow-level-synchronization","text":"Workflow-level synchronization limits parallel execution of the workflow if workflows have the same synchronization reference. In this example, Workflow refers to workflow synchronization key which is configured as limit 1, so only one workflow instance will be executed at given time even multiple workflows created. 
Using a semaphore configured by a ConfigMap : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-wf-level- spec : entrypoint : whalesay synchronization : semaphore : configMapKeyRef : name : my-config key : workflow templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ] Using a mutex: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-wf-level- spec : entrypoint : whalesay synchronization : mutex : name : workflow templates : - name : whalesay container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"hello world\" ]","title":"Workflow-level Synchronization"},{"location":"synchronization/#template-level-synchronization","text":"Template-level synchronization limits parallel execution of the template across workflows, if templates have the same synchronization reference. In this example, acquire-lock template has synchronization reference of template key which is configured as limit 2, so two instances of templates will be executed at a given time: even multiple steps/tasks within workflow or different workflows referring to the same template. Using a semaphore configured by a ConfigMap : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-tmpl-level- spec : entrypoint : synchronization-tmpl-level-example templates : - name : synchronization-tmpl-level-example steps : - - name : synchronization-acquire-lock template : acquire-lock arguments : parameters : - name : seconds value : \"{{item}}\" withParam : '[\"1\",\"2\",\"3\",\"4\",\"5\"]' - name : acquire-lock synchronization : semaphore : configMapKeyRef : name : my-config key : template container : image : alpine:latest command : [ sh , -c ] args : [ \"sleep 10; echo acquired lock\" ] Using a mutex: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : synchronization-tmpl-level- spec : entrypoint : synchronization-tmpl-level-example templates : - name : synchronization-tmpl-level-example steps : - - name : synchronization-acquire-lock template : acquire-lock arguments : parameters : - name : seconds value : \"{{item}}\" withParam : '[\"1\",\"2\",\"3\",\"4\",\"5\"]' - name : acquire-lock synchronization : mutex : name : template container : image : alpine:latest command : [ sh , -c ] args : [ \"sleep 10; echo acquired lock\" ] Examples: Workflow level semaphore Workflow level mutex Step level semaphore Step level mutex","title":"Template-level Synchronization"},{"location":"synchronization/#other-parallelism-support","text":"In addition to this synchronization, the workflow controller supports a parallelism setting that applies to all workflows in the system (it is not granular to a class of workflows, or tasks withing them). Furthermore, there is a parallelism setting at the workflow and template level, but this only restricts total concurrent executions of tasks within the same workflow.","title":"Other Parallelism support"},{"location":"template-defaults/","text":"Template Defaults \u00b6 v3.1 and after Introduction \u00b6 TemplateDefaults feature enables the user to configure the default template values in workflow spec level that will apply to all the templates in the workflow. If the template has a value that also has a default value in templateDefault , the Template's value will take precedence. These values will be applied during the runtime. 
Template values and default values are merged using Kubernetes strategic merge patch. To check whether and how list values are merged, inspect the patchStrategy and patchMergeKey tags in the workflow definition . Configuring templateDefaults in WorkflowSpec \u00b6 For example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : name : template-defaults-example spec : entrypoint : main templateDefaults : timeout : 30s # timeout value will be applied to all templates retryStrategy : # retryStrategy value will be applied to all templates limit : 2 templates : - name : main container : image : docker/whalesay:latest template defaults example Configuring templateDefaults in Controller Level \u00b6 Operator can configure the templateDefaults in workflow defaults . This templateDefault will be applied to all the workflow which runs on the controller. The following would be specified in the Config Map: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : # Default values that will apply to all Workflows from this controller, unless overridden on the Workflow-level workflowDefaults : | metadata: annotations: argo: workflows labels: foo: bar spec: ttlStrategy: secondsAfterSuccess: 5 templateDefaults: timeout: 30s","title":"Template Defaults"},{"location":"template-defaults/#template-defaults","text":"v3.1 and after","title":"Template Defaults"},{"location":"template-defaults/#introduction","text":"TemplateDefaults feature enables the user to configure the default template values in workflow spec level that will apply to all the templates in the workflow. If the template has a value that also has a default value in templateDefault , the Template's value will take precedence. These values will be applied during the runtime. Template values and default values are merged using Kubernetes strategic merge patch. To check whether and how list values are merged, inspect the patchStrategy and patchMergeKey tags in the workflow definition .","title":"Introduction"},{"location":"template-defaults/#configuring-templatedefaults-in-workflowspec","text":"For example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : name : template-defaults-example spec : entrypoint : main templateDefaults : timeout : 30s # timeout value will be applied to all templates retryStrategy : # retryStrategy value will be applied to all templates limit : 2 templates : - name : main container : image : docker/whalesay:latest template defaults example","title":"Configuring templateDefaults in WorkflowSpec"},{"location":"template-defaults/#configuring-templatedefaults-in-controller-level","text":"Operator can configure the templateDefaults in workflow defaults . This templateDefault will be applied to all the workflow which runs on the controller. 
The following would be specified in the Config Map: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : # Default values that will apply to all Workflows from this controller, unless overridden on the Workflow-level workflowDefaults : | metadata: annotations: argo: workflows labels: foo: bar spec: ttlStrategy: secondsAfterSuccess: 5 templateDefaults: timeout: 30s","title":"Configuring templateDefaults in Controller Level"},{"location":"tls/","text":"Transport Layer Security \u00b6 v2.8 and after If you're running Argo Server you have three options with increasing transport security (note - you should also be running authentication ): Default configuration \u00b6 v2.8 - 2.12 Defaults to Plain Text v3.0 and after Defaults to Encrypted if cert is available Argo image/deployment defaults to Encrypted with a self-signed certificate which expires after 365 days. Plain Text \u00b6 Recommended for: development. Everything is sent in plain text. Start Argo Server with the --secure=false (or ARGO_SECURE=false ) flag, e.g.: export ARGO_SECURE = false argo server --secure = false To secure the UI you may front it with a HTTPS proxy. Encrypted \u00b6 Recommended for: development and test environments. You can encrypt connections without any real effort. Start Argo Server with the --secure flag, e.g.: argo server --secure It will start with a self-signed certificate that expires after 365 days. Run the CLI with --secure (or ARGO_SECURE=true ) and --insecure-skip-verify (or ARGO_INSECURE_SKIP_VERIFY=true ). argo --secure --insecure-skip-verify list export ARGO_SECURE = true export ARGO_INSECURE_SKIP_VERIFY = true argo --secure --insecure-skip-verify list Tip: Don't forget to update your readiness probe to use HTTPS. To do so, edit your argo-server Deployment's readinessProbe spec: readinessProbe : httpGet : scheme : HTTPS Encrypted and Verified \u00b6 Recommended for: production environments. Run your HTTPS proxy in front of the Argo Server. You'll need to set-up your certificates (this is out of scope of this documentation). Start Argo Server with the --secure flag, e.g.: argo server --secure As before, it will start with a self-signed certificate that expires after 365 days. Run the CLI with --secure (or ARGO_SECURE=true ) only. argo --secure list export ARGO_SECURE = true argo list TLS Min Version \u00b6 Set TLS_MIN_VERSION to be the minimum TLS version to use. This is v1.2 by default. This must be one of these int values . Version Value v1.0 769 v1.1 770 v1.2 771 v1.3 772","title":"Transport Layer Security"},{"location":"tls/#transport-layer-security","text":"v2.8 and after If you're running Argo Server you have three options with increasing transport security (note - you should also be running authentication ):","title":"Transport Layer Security"},{"location":"tls/#default-configuration","text":"v2.8 - 2.12 Defaults to Plain Text v3.0 and after Defaults to Encrypted if cert is available Argo image/deployment defaults to Encrypted with a self-signed certificate which expires after 365 days.","title":"Default configuration"},{"location":"tls/#plain-text","text":"Recommended for: development. Everything is sent in plain text. Start Argo Server with the --secure=false (or ARGO_SECURE=false ) flag, e.g.: export ARGO_SECURE = false argo server --secure = false To secure the UI you may front it with a HTTPS proxy.","title":"Plain Text"},{"location":"tls/#encrypted","text":"Recommended for: development and test environments. 
You can encrypt connections without any real effort. Start Argo Server with the --secure flag, e.g.: argo server --secure It will start with a self-signed certificate that expires after 365 days. Run the CLI with --secure (or ARGO_SECURE=true ) and --insecure-skip-verify (or ARGO_INSECURE_SKIP_VERIFY=true ). argo --secure --insecure-skip-verify list export ARGO_SECURE = true export ARGO_INSECURE_SKIP_VERIFY = true argo --secure --insecure-skip-verify list Tip: Don't forget to update your readiness probe to use HTTPS. To do so, edit your argo-server Deployment's readinessProbe spec: readinessProbe : httpGet : scheme : HTTPS","title":"Encrypted"},{"location":"tls/#encrypted-and-verified","text":"Recommended for: production environments. Run your HTTPS proxy in front of the Argo Server. You'll need to set-up your certificates (this is out of scope of this documentation). Start Argo Server with the --secure flag, e.g.: argo server --secure As before, it will start with a self-signed certificate that expires after 365 days. Run the CLI with --secure (or ARGO_SECURE=true ) only. argo --secure list export ARGO_SECURE = true argo list","title":"Encrypted and Verified"},{"location":"tls/#tls-min-version","text":"Set TLS_MIN_VERSION to be the minimum TLS version to use. This is v1.2 by default. This must be one of these int values . Version Value v1.0 769 v1.1 770 v1.2 771 v1.3 772","title":"TLS Min Version"},{"location":"tolerating-pod-deletion/","text":"Tolerating Pod Deletion \u00b6 v2.12 and after In Kubernetes, pods are cattle and can be deleted at any time. Deletion could be manually via kubectl delete pod , during a node drain, or for other reasons. This can be very inconvenient, your workflow will error, but for reasons outside of your control. A pod disruption budget can reduce the likelihood of this happening. But, it cannot entirely prevent it. To retry pods that were deleted, set retryStrategy.retryPolicy: OnError . This can be set at a workflow-level, template-level, or globally (using workflow defaults ) Example \u00b6 Run the following workflow (which will sleep for 30s): apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : name : example spec : retryStrategy : retryPolicy : OnError limit : 1 entrypoint : main templates : - name : main container : image : docker/whalesay:latest command : - sleep - 30s Then execute kubectl delete pod example . You'll see that the errored node is automatically retried. \ud83d\udca1 Read more on architecting workflows for reliability .","title":"Tolerating Pod Deletion"},{"location":"tolerating-pod-deletion/#tolerating-pod-deletion","text":"v2.12 and after In Kubernetes, pods are cattle and can be deleted at any time. Deletion could be manually via kubectl delete pod , during a node drain, or for other reasons. This can be very inconvenient, your workflow will error, but for reasons outside of your control. A pod disruption budget can reduce the likelihood of this happening. But, it cannot entirely prevent it. To retry pods that were deleted, set retryStrategy.retryPolicy: OnError . 
This can be set at a workflow-level, template-level, or globally (using workflow defaults )","title":"Tolerating Pod Deletion"},{"location":"tolerating-pod-deletion/#example","text":"Run the following workflow (which will sleep for 30s): apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : name : example spec : retryStrategy : retryPolicy : OnError limit : 1 entrypoint : main templates : - name : main container : image : docker/whalesay:latest command : - sleep - 30s Then execute kubectl delete pod example . You'll see that the errored node is automatically retried. \ud83d\udca1 Read more on architecting workflows for reliability .","title":"Example"},{"location":"training/","text":"Training \u00b6 Videos \u00b6 We also have a YouTube playlist of videos that includes workshops you can follow along with: Open the playlist Hands-On \u00b6 We've created a Killercoda course featuring beginner and intermediate lessons . These allow to you try out Argo Workflows in your web browser without needing to install anything on your computer. Each lesson starts up a Kubernetes cluster that you can access via a web browser. Additional resources \u00b6 Visit the awesome-argo GitHub repo for more educational resources.","title":"Training"},{"location":"training/#training","text":"","title":"Training"},{"location":"training/#videos","text":"We also have a YouTube playlist of videos that includes workshops you can follow along with: Open the playlist","title":"Videos"},{"location":"training/#hands-on","text":"We've created a Killercoda course featuring beginner and intermediate lessons . These allow to you try out Argo Workflows in your web browser without needing to install anything on your computer. Each lesson starts up a Kubernetes cluster that you can access via a web browser.","title":"Hands-On"},{"location":"training/#additional-resources","text":"Visit the awesome-argo GitHub repo for more educational resources.","title":"Additional resources"},{"location":"upgrading/","text":"Upgrading Guide \u00b6 Breaking changes typically (sometimes we don't realise they are breaking) have \"!\" in the commit message, as per the conventional commits . Upgrading to v3.5 \u00b6 There are no known breaking changes in this release. Please file an issue if you encounter any unexpected problems after upgrading. Upgrading to v3.4 \u00b6 Non-Emissary executors are removed. ( #7829 ) \u00b6 Emissary executor is now the only supported executor. If you are using other executors, e.g. docker, k8sapi, pns, and kubelet, you need to remove your containerRuntimeExecutors and containerRuntimeExecutor from your controller's configmap. If you have workflows that use different executors with the label workflows.argoproj.io/container-runtime-executor , this is no longer supported and will not be effective. chore!: Remove dataflow pipelines from codebase. (#9071) \u00b6 You are affected if you are using dataflow pipelines in the UI or via the /pipelines endpoint. We no longer support dataflow pipelines and all relevant code has been removed. feat!: Add entrypoint lookup. Fixes #8344 \u00b6 Affected if: Using the Emissary executor. Used the args field for any entry in images . This PR automatically looks up the command and entrypoint. The implementation for config look-up was incorrect (it allowed you to specify args but not entrypoint ). args has been removed to correct the behaviour. If you are incorrectly configured, the workflow controller will error on start-up. 
Actions \u00b6 You don't need to configure images that use v2 manifests anymore. You can just remove them (e.g. argoproj/argosay:v2): % docker manifest inspect argoproj/argosay:v2 ... \"schemaVersion\" : 2 , ... For v1 manifests (e.g. docker/whalesay:latest): % docker image inspect -f '{{.Config.Entrypoint}} {{.Config.Cmd}}' docker/whalesay:latest [] [ /bin/bash ] images : docker/whalesay:latest : cmd : [ /bin/bash ] feat: Fail on invalid config. (#8295) \u00b6 The workflow controller will error on start-up if incorrectly configured, rather than silently ignoring mis-configuration. Failed to register watch for controller config map: error unmarshaling JSON: while decoding JSON: json: unknown field \\\"args\\\" feat: add indexes for improve archived workflow performance. (#8860) \u00b6 This PR adds indexes to archived workflow tables. This change may cause a long time to upgrade if the user has a large table. feat: enhance artifact visualization (#8655) \u00b6 For AWS users using S3: visualizing artifacts in the UI and downloading them now requires an additional \"Action\" to be configured in your S3 bucket policy: \"ListBucket\". Upgrading to v3.3 \u00b6 662a7295b feat: Replace patch pod with create workflowtaskresult . Fixes #3961 (#8000) \u00b6 The PR changes the permissions that can be used by a workflow to remove the pod patch permission. See workflow RBAC and #8013 . 06d4bf76f fix: Reduce agent permissions. Fixes #7986 (#7987) \u00b6 The PR changes the permissions used by the agent to report back the outcome of HTTP template requests. The permission patch workflowtasksets/status replaces patch workflowtasksets , for example: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : agent rules : - apiGroups : - argoproj.io resources : - workflowtasksets/status verbs : - patch Workflows running during any upgrade should be give both permissions. See #8013 . feat!: Remove deprecated config flags \u00b6 This PR removes the following configmap items - executorImage (use executor.image in configmap instead) e.g. Workflow controller configmap similar to the following one given below won't be valid anymore: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : ... executorImage : argoproj/argocli:latest ... From now and onwards, only provide the executor image in workflow controller as a command argument as shown below: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : ... executor : | image: argoproj/argocli:latest ... executorImagePullPolicy (use executor.imagePullPolicy in configmap instead) e.g. Workflow controller configmap similar to the following one given below won't be valid anymore: data : ... executorImagePullPolicy : IfNotPresent ... Change it as shown below: data : ... executor : | imagePullPolicy: IfNotPresent ... executorResources (use executor.resources in configmap instead) e.g. Workflow controller configmap similar to the following one given below won't be valid anymore: data : ... executorResources : requests : cpu : 0.1 memory : 64Mi limits : cpu : 0.5 memory : 512Mi ... Change it as shown below: data : ... executor : | resources: requests: cpu: 0.1 memory: 64Mi limits: cpu: 0.5 memory: 512Mi ... fce82d572 feat: Remove pod workers (#7837) \u00b6 This PR removes pod workers from the code, the pod informer directly writes into the workflow queue. As a result the --pod-workers flag has been removed. 
93c11a24ff feat: Add TLS to Metrics and Telemetry servers (#7041) \u00b6 This PR adds the ability to send metrics over TLS with a self-signed certificate. In v3.5 this will be enabled by default, so it is recommended that users enable this functionality now. 0758eab11 feat(server)!: Sync dispatch of webhook events by default \u00b6 This is not expected to impact users. Events dispatch in the Argo Server has been change from async to sync by default. This is so that errors are surfaced to the client, rather than only appearing as logs or Kubernetes events. It is possible that response times under load are too long for your client and you may prefer to revert this behaviour. To revert this behaviour, restart Argo Server with ARGO_EVENT_ASYNC_DISPATCH=true . Make sure that asyncDispatch=true is logged. bd49c6303 fix(artifact)!: default https to any URL missing a scheme. Fixes #6973 \u00b6 HTTPArtifact without a scheme will now defaults to https instead of http user need to explicitly include a http prefix if they want to retrieve HTTPArtifact through http chore!: Remove the hidden flag --verify from argo submit \u00b6 The hidden flag --verify has been removed from argo submit . This is a internal testing flag we don't need anymore. Upgrading to v3.2 \u00b6 e5b131a33 feat: Add template node to pod name. Fixes #1319 (#6712) \u00b6 This add the template name to the pod name, to make it easier to understand which pod ran which step. This behaviour can be reverted by setting POD_NAMES=v1 on the workflow controller. be63efe89 feat(executor)!: Change argoexec base image to alpine. Closes #5720 (#6006) \u00b6 Changing from Debian to Alpine reduces the size of the argoexec image, resulting is faster starting workflow pods, and it also reduce the risk of security issues. There is not such thing as a free lunch. There maybe other behaviour changes we don't know of yet. Some users found this change prevented workflow with very large parameters from running. See #7586 48d7ad3 chore: Remove onExit naming transition scaffolding code (#6297) \u00b6 When upgrading from v3.2 workflows that are running at the time of the upgrade and have onExit steps may experience the onExit step running twice. This is only applicable for workflows that began running before a workflow-controller upgrade and are still running after the upgrade is complete. This is only applicable for upgrading from v2.12 or earlier directly to v3.2 or later. Even under these conditions, duplicate work may not be experienced. Upgrading to v3.1 \u00b6 3fff791e4 build!: Automatically add manifests to v* tags (#5880) \u00b6 The manifests in the repository on the tag will no longer contain the image tag, instead they will contain :latest . You must not get your manifests from the Git repository, you must get them from the release notes. You must not use the stable tag. This is defunct, and will be removed in v3.1. ab361667a feat(controller) Emissary executor. (#4925) \u00b6 The Emissary executor is not a breaking change per-se, but it is brand new so we would not recommend you use it by default yet. Instead, we recommend you test it out on some workflows using a workflow-controller-configmap configuration . # Specifies the executor to use. # # You can use this to: # * Tailor your executor based on your preference for security or performance. # * Test out an executor without committing yourself to use it for every workflow. # # To find out which executor was actually use, see the `wait` container logs. 
# # The list is in order of precedence; the first matching executor is used. # This has precedence over `containerRuntimeExecutor`. containerRuntimeExecutors : | - name: emissary selector: matchLabels: workflows.argoproj.io/container-runtime-executor: emissary be63efe89 feat(controller): Expression template tags. Resolves #4548 & #1293 (#5115) \u00b6 This PR introduced a new expression syntax know as \"expression tag template\". A user has reported that this does not always play nicely with the when condition syntax (Goevaluate). This can be resolved using a single quote in your when expression: when : \"'{{inputs.parameters.should-print}}' != '2021-01-01'\" Learn more Upgrading to v3.0 \u00b6 defbd600e fix: Default ARGO_SECURE=true. Fixes #5607 (#5626) \u00b6 The server now starts with TLS enabled by default if a key is available. The original behaviour can be configured with --secure=false . If you have an ingress, you may need to add the appropriate annotations:(varies by ingress): alb.ingress.kubernetes.io/backend-protocol : HTTPS nginx.ingress.kubernetes.io/backend-protocol : HTTPS 01d310235 chore(server)!: Required authentication by default. Resolves #5206 (#5211) \u00b6 To login to the user interface, you must provide a login token. The original behaviour can be configured with --auth-mode=server . f31e0c6f9 chore!: Remove deprecated fields (#5035) \u00b6 Some fields that were deprecated in early 2020 have been removed. Field Action template.template and template.templateRef The workflow spec must be changed to use steps or DAG, otherwise the workflow will error. spec.ttlSecondsAfterFinished change to spec.ttlStrategy.secondsAfterCompletion , otherwise the workflow will not be garbage collected as expected. To find impacted workflows: kubectl get wf --all-namespaces -o yaml | grep templateRef kubectl get wf --all-namespaces -o yaml | grep ttlSecondsAfterFinished c8215f972 feat(controller)!: Key-only artifacts. Fixes #3184 (#4618) \u00b6 This change is not breaking per-se, but many users do not appear to aware of artifact repository ref , so check your usage of that feature if you have problems.","title":"Upgrading Guide"},{"location":"upgrading/#upgrading-guide","text":"Breaking changes typically (sometimes we don't realise they are breaking) have \"!\" in the commit message, as per the conventional commits .","title":"Upgrading Guide"},{"location":"upgrading/#upgrading-to-v35","text":"There are no known breaking changes in this release. Please file an issue if you encounter any unexpected problems after upgrading.","title":"Upgrading to v3.5"},{"location":"upgrading/#upgrading-to-v34","text":"","title":"Upgrading to v3.4"},{"location":"upgrading/#non-emissary-executors-are-removed-7829","text":"Emissary executor is now the only supported executor. If you are using other executors, e.g. docker, k8sapi, pns, and kubelet, you need to remove your containerRuntimeExecutors and containerRuntimeExecutor from your controller's configmap. If you have workflows that use different executors with the label workflows.argoproj.io/container-runtime-executor , this is no longer supported and will not be effective.","title":"Non-Emissary executors are removed. (#7829)"},{"location":"upgrading/#chore-remove-dataflow-pipelines-from-codebase-9071","text":"You are affected if you are using dataflow pipelines in the UI or via the /pipelines endpoint. We no longer support dataflow pipelines and all relevant code has been removed.","title":"chore!: Remove dataflow pipelines from codebase. 
(#9071)"},{"location":"upgrading/#feat-add-entrypoint-lookup-fixes-8344","text":"Affected if: Using the Emissary executor. Used the args field for any entry in images . This PR automatically looks up the command and entrypoint. The implementation for config look-up was incorrect (it allowed you to specify args but not entrypoint ). args has been removed to correct the behaviour. If you are incorrectly configured, the workflow controller will error on start-up.","title":"feat!: Add entrypoint lookup. Fixes #8344"},{"location":"upgrading/#actions","text":"You don't need to configure images that use v2 manifests anymore. You can just remove them (e.g. argoproj/argosay:v2): % docker manifest inspect argoproj/argosay:v2 ... \"schemaVersion\" : 2 , ... For v1 manifests (e.g. docker/whalesay:latest): % docker image inspect -f '{{.Config.Entrypoint}} {{.Config.Cmd}}' docker/whalesay:latest [] [ /bin/bash ] images : docker/whalesay:latest : cmd : [ /bin/bash ]","title":"Actions"},{"location":"upgrading/#feat-fail-on-invalid-config-8295","text":"The workflow controller will error on start-up if incorrectly configured, rather than silently ignoring mis-configuration. Failed to register watch for controller config map: error unmarshaling JSON: while decoding JSON: json: unknown field \\\"args\\\"","title":"feat: Fail on invalid config. (#8295)"},{"location":"upgrading/#feat-add-indexes-for-improve-archived-workflow-performance-8860","text":"This PR adds indexes to archived workflow tables. This change may cause a long time to upgrade if the user has a large table.","title":"feat: add indexes for improve archived workflow performance. (#8860)"},{"location":"upgrading/#feat-enhance-artifact-visualization-8655","text":"For AWS users using S3: visualizing artifacts in the UI and downloading them now requires an additional \"Action\" to be configured in your S3 bucket policy: \"ListBucket\".","title":"feat: enhance artifact visualization (#8655)"},{"location":"upgrading/#upgrading-to-v33","text":"","title":"Upgrading to v3.3"},{"location":"upgrading/#662a7295b-feat-replace-patch-pod-with-create-workflowtaskresult-fixes-3961-8000","text":"The PR changes the permissions that can be used by a workflow to remove the pod patch permission. See workflow RBAC and #8013 .","title":"662a7295b feat: Replace patch pod with create workflowtaskresult. Fixes #3961 (#8000)"},{"location":"upgrading/#06d4bf76f-fix-reduce-agent-permissions-fixes-7986-7987","text":"The PR changes the permissions used by the agent to report back the outcome of HTTP template requests. The permission patch workflowtasksets/status replaces patch workflowtasksets , for example: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : agent rules : - apiGroups : - argoproj.io resources : - workflowtasksets/status verbs : - patch Workflows running during any upgrade should be give both permissions. See #8013 .","title":"06d4bf76f fix: Reduce agent permissions. Fixes #7986 (#7987)"},{"location":"upgrading/#feat-remove-deprecated-config-flags","text":"This PR removes the following configmap items - executorImage (use executor.image in configmap instead) e.g. Workflow controller configmap similar to the following one given below won't be valid anymore: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : ... executorImage : argoproj/argocli:latest ... 
From now and onwards, only provide the executor image in workflow controller as a command argument as shown below: apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : ... executor : | image: argoproj/argocli:latest ... executorImagePullPolicy (use executor.imagePullPolicy in configmap instead) e.g. Workflow controller configmap similar to the following one given below won't be valid anymore: data : ... executorImagePullPolicy : IfNotPresent ... Change it as shown below: data : ... executor : | imagePullPolicy: IfNotPresent ... executorResources (use executor.resources in configmap instead) e.g. Workflow controller configmap similar to the following one given below won't be valid anymore: data : ... executorResources : requests : cpu : 0.1 memory : 64Mi limits : cpu : 0.5 memory : 512Mi ... Change it as shown below: data : ... executor : | resources: requests: cpu: 0.1 memory: 64Mi limits: cpu: 0.5 memory: 512Mi ...","title":"feat!: Remove deprecated config flags"},{"location":"upgrading/#fce82d572-feat-remove-pod-workers-7837","text":"This PR removes pod workers from the code, the pod informer directly writes into the workflow queue. As a result the --pod-workers flag has been removed.","title":"fce82d572 feat: Remove pod workers (#7837)"},{"location":"upgrading/#93c11a24ff-feat-add-tls-to-metrics-and-telemetry-servers-7041","text":"This PR adds the ability to send metrics over TLS with a self-signed certificate. In v3.5 this will be enabled by default, so it is recommended that users enable this functionality now.","title":"93c11a24ff feat: Add TLS to Metrics and Telemetry servers (#7041)"},{"location":"upgrading/#0758eab11-featserver-sync-dispatch-of-webhook-events-by-default","text":"This is not expected to impact users. Events dispatch in the Argo Server has been change from async to sync by default. This is so that errors are surfaced to the client, rather than only appearing as logs or Kubernetes events. It is possible that response times under load are too long for your client and you may prefer to revert this behaviour. To revert this behaviour, restart Argo Server with ARGO_EVENT_ASYNC_DISPATCH=true . Make sure that asyncDispatch=true is logged.","title":"0758eab11 feat(server)!: Sync dispatch of webhook events by default"},{"location":"upgrading/#bd49c6303-fixartifact-default-https-to-any-url-missing-a-scheme-fixes-6973","text":"HTTPArtifact without a scheme will now defaults to https instead of http user need to explicitly include a http prefix if they want to retrieve HTTPArtifact through http","title":"bd49c6303 fix(artifact)!: default https to any URL missing a scheme. Fixes #6973"},{"location":"upgrading/#chore-remove-the-hidden-flag-verify-from-argo-submit","text":"The hidden flag --verify has been removed from argo submit . This is a internal testing flag we don't need anymore.","title":"chore!: Remove the hidden flag --verify from argo submit"},{"location":"upgrading/#upgrading-to-v32","text":"","title":"Upgrading to v3.2"},{"location":"upgrading/#e5b131a33-feat-add-template-node-to-pod-name-fixes-1319-6712","text":"This add the template name to the pod name, to make it easier to understand which pod ran which step. This behaviour can be reverted by setting POD_NAMES=v1 on the workflow controller.","title":"e5b131a33 feat: Add template node to pod name. 
Fixes #1319 (#6712)"},{"location":"upgrading/#be63efe89-featexecutor-change-argoexec-base-image-to-alpine-closes-5720-6006","text":"Changing from Debian to Alpine reduces the size of the argoexec image, resulting is faster starting workflow pods, and it also reduce the risk of security issues. There is not such thing as a free lunch. There maybe other behaviour changes we don't know of yet. Some users found this change prevented workflow with very large parameters from running. See #7586","title":"be63efe89 feat(executor)!: Change argoexec base image to alpine. Closes #5720 (#6006)"},{"location":"upgrading/#48d7ad3-chore-remove-onexit-naming-transition-scaffolding-code-6297","text":"When upgrading from v3.2 workflows that are running at the time of the upgrade and have onExit steps may experience the onExit step running twice. This is only applicable for workflows that began running before a workflow-controller upgrade and are still running after the upgrade is complete. This is only applicable for upgrading from v2.12 or earlier directly to v3.2 or later. Even under these conditions, duplicate work may not be experienced.","title":"48d7ad3 chore: Remove onExit naming transition scaffolding code (#6297)"},{"location":"upgrading/#upgrading-to-v31","text":"","title":"Upgrading to v3.1"},{"location":"upgrading/#3fff791e4-build-automatically-add-manifests-to-v-tags-5880","text":"The manifests in the repository on the tag will no longer contain the image tag, instead they will contain :latest . You must not get your manifests from the Git repository, you must get them from the release notes. You must not use the stable tag. This is defunct, and will be removed in v3.1.","title":"3fff791e4 build!: Automatically add manifests to v* tags (#5880)"},{"location":"upgrading/#ab361667a-featcontroller-emissary-executor-4925","text":"The Emissary executor is not a breaking change per-se, but it is brand new so we would not recommend you use it by default yet. Instead, we recommend you test it out on some workflows using a workflow-controller-configmap configuration . # Specifies the executor to use. # # You can use this to: # * Tailor your executor based on your preference for security or performance. # * Test out an executor without committing yourself to use it for every workflow. # # To find out which executor was actually use, see the `wait` container logs. # # The list is in order of precedence; the first matching executor is used. # This has precedence over `containerRuntimeExecutor`. containerRuntimeExecutors : | - name: emissary selector: matchLabels: workflows.argoproj.io/container-runtime-executor: emissary","title":"ab361667a feat(controller) Emissary executor. (#4925)"},{"location":"upgrading/#be63efe89-featcontroller-expression-template-tags-resolves-4548-1293-5115","text":"This PR introduced a new expression syntax know as \"expression tag template\". A user has reported that this does not always play nicely with the when condition syntax (Goevaluate). This can be resolved using a single quote in your when expression: when : \"'{{inputs.parameters.should-print}}' != '2021-01-01'\" Learn more","title":"be63efe89 feat(controller): Expression template tags. Resolves #4548 & #1293 (#5115)"},{"location":"upgrading/#upgrading-to-v30","text":"","title":"Upgrading to v3.0"},{"location":"upgrading/#defbd600e-fix-default-argo_securetrue-fixes-5607-5626","text":"The server now starts with TLS enabled by default if a key is available. The original behaviour can be configured with --secure=false . 
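For operators who install from the static manifests and want the original plain-text behaviour back, a minimal sketch (Deployment name, labels and image are illustrative; merge only the args change into your actual argo-server manifest) is to pass the flag to the argo-server container. Setting the ARGO_SECURE=false environment variable on the container is equivalent.

```yaml
# A sketch only: names and image are illustrative, not the exact upstream manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-server
spec:
  selector:
    matchLabels:
      app: argo-server
  template:
    metadata:
      labels:
        app: argo-server
    spec:
      containers:
        - name: argo-server
          image: argoproj/argocli:latest   # illustrative image reference
          args:
            - server
            - --secure=false
```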
If you have an ingress, you may need to add the appropriate annotations:(varies by ingress): alb.ingress.kubernetes.io/backend-protocol : HTTPS nginx.ingress.kubernetes.io/backend-protocol : HTTPS","title":"defbd600e fix: Default ARGO_SECURE=true. Fixes #5607 (#5626)"},{"location":"upgrading/#01d310235-choreserver-required-authentication-by-default-resolves-5206-5211","text":"To login to the user interface, you must provide a login token. The original behaviour can be configured with --auth-mode=server .","title":"01d310235 chore(server)!: Required authentication by default. Resolves #5206 (#5211)"},{"location":"upgrading/#f31e0c6f9-chore-remove-deprecated-fields-5035","text":"Some fields that were deprecated in early 2020 have been removed. Field Action template.template and template.templateRef The workflow spec must be changed to use steps or DAG, otherwise the workflow will error. spec.ttlSecondsAfterFinished change to spec.ttlStrategy.secondsAfterCompletion , otherwise the workflow will not be garbage collected as expected. To find impacted workflows: kubectl get wf --all-namespaces -o yaml | grep templateRef kubectl get wf --all-namespaces -o yaml | grep ttlSecondsAfterFinished","title":"f31e0c6f9 chore!: Remove deprecated fields (#5035)"},{"location":"upgrading/#c8215f972-featcontroller-key-only-artifacts-fixes-3184-4618","text":"This change is not breaking per-se, but many users do not appear to aware of artifact repository ref , so check your usage of that feature if you have problems.","title":"c8215f972 feat(controller)!: Key-only artifacts. Fixes #3184 (#4618)"},{"location":"variables/","text":"Workflow Variables \u00b6 Some fields in a workflow specification allow for variable references which are automatically substituted by Argo. How to use variables \u00b6 Variables are enclosed in curly braces: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-parameters- spec : entrypoint : whalesay arguments : parameters : - name : message value : hello world templates : - name : whalesay inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] The following variables are made available to reference various meta-data of a workflow: Template Tag Kinds \u00b6 There are two kinds of template tag: simple The default, e.g. {{workflow.name}} expression Where {{ is immediately followed by = , e.g. {{=workflow.name}} . Simple \u00b6 The tag is substituted with the variable that has a name the same as the tag. Simple tags may have white-space between the brackets and variable as seen below. However, there is a known issue where variables may fail to interpolate with white-space, so it is recommended to avoid using white-space until this issue is resolved. Please report unexpected behavior with reproducible examples. args : [ \"{{ inputs.parameters.message }}\" ] Expression \u00b6 Since v3.1 The tag is substituted with the result of evaluating the tag as an expression. Note that any hyphenated parameter names or step names will cause a parsing error. You can reference them by indexing into the parameter or step map, e.g. inputs.parameters['my-param'] or steps['my-step'].outputs.result . Learn about the expression syntax . 
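For instance, a hedged sketch of an expression tag that references a hyphenated parameter by indexing into the parameter map (the parameter name my-param is illustrative, not required):

```yaml
# Sketch: expression tag ({{= ... }}) with bracket indexing for a hyphenated
# parameter name. "my-param" is a hypothetical parameter defined in the
# template's inputs.
args: [ "{{=inputs.parameters['my-param']}}" ]
```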
Examples \u00b6 Plain list: [1, 2] Filter a list: filter([1, 2], { # > 1}) Map a list: map([1, 2], { # * 2 }) We provide some core functions: Cast to int: asInt(inputs.parameters['my-int-param']) Cast to float: asFloat(inputs.parameters['my-float-param']) Cast to string: string(1) Convert to a JSON string (needed for withParam ): toJson([1, 2]) Extract data from JSON: jsonpath(inputs.parameters.json, '$.some.path') You can also use Sprig functions : Trim a string: sprig.trim(inputs.parameters['my-string-param']) Sprig error handling Sprig functions often do not raise errors. For example, if int is used on an invalid value, it returns 0 . Please review the Sprig documentation to understand which functions raise errors and which do not. Reference \u00b6 All Templates \u00b6 Variable Description inputs.parameters. Input parameter to a template inputs.parameters All input parameters to a template as a JSON string inputs.artifacts. Input artifact to a template node.name Full name of the node Steps Templates \u00b6 Variable Description steps.name Name of the step steps..id unique id of container step steps..ip IP address of a previous daemon container step steps..status Phase status of any previous step steps..exitCode Exit code of any previous script or container step steps..startedAt Time-stamp when the step started steps..finishedAt Time-stamp when the step finished steps..hostNodeName Host node where task ran (available from version 3.5) steps..outputs.result Output result of any previous container or script step steps..outputs.parameters When the previous step uses withItems or withParams , this contains a JSON array of the output parameter maps of each invocation steps..outputs.parameters. Output parameter of any previous step. When the previous step uses withItems or withParams , this contains a JSON array of the output parameter values of each invocation steps..outputs.artifacts. Output artifact of any previous step DAG Templates \u00b6 Variable Description tasks.name Name of the task tasks..id unique id of container task tasks..ip IP address of a previous daemon container task tasks..status Phase status of any previous task tasks..exitCode Exit code of any previous script or container task tasks..startedAt Time-stamp when the task started tasks..finishedAt Time-stamp when the task finished tasks..hostNodeName Host node where task ran (available from version 3.5) tasks..outputs.result Output result of any previous container or script task tasks..outputs.parameters When the previous task uses withItems or withParams , this contains a JSON array of the output parameter maps of each invocation tasks..outputs.parameters. Output parameter of any previous task. When the previous task uses withItems or withParams , this contains a JSON array of the output parameter values of each invocation tasks..outputs.artifacts. Output artifact of any previous task HTTP Templates \u00b6 Since v3.3 Only available for successCondition Variable Description request.method Request method ( string ) request.url Request URL ( string ) request.body Request body ( string ) request.headers Request headers ( map[string][]string ) response.statusCode Response status code ( int ) response.body Response body ( string ) response.headers Response headers ( map[string][]string ) RetryStrategy \u00b6 When using the expression field within retryStrategy , special variables are available. 
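For example, a minimal sketch of a retry policy that keeps retrying only while the last exit code was not 2, mirroring the expression form shown in the note below (the limit value is illustrative):

```yaml
# Sketch: retryStrategy using the lastRetry variables. The limit is
# illustrative; the expression form follows the note later in this section.
retryStrategy:
  limit: "3"
  expression: "{{=lastRetry.exitCode != '2'}}"
```

The variables available inside that expression are listed below.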
Variable Description lastRetry.exitCode Exit code of the last retry lastRetry.status Status of the last retry lastRetry.duration Duration in seconds of the last retry lastRetry.message Message output from the last retry (available from version 3.5) Note: These variables evaluate to a string type. If using advanced expressions, either cast them to int values ( expression: \"{{=asInt(lastRetry.exitCode) >= 2}}\" ) or compare them to string values ( expression: \"{{=lastRetry.exitCode != '2'}}\" ). Container/Script Templates \u00b6 Variable Description pod.name Pod name of the container/script retries The retry number of the container/script if retryStrategy is specified inputs.artifacts..path Local path of the input artifact outputs.artifacts..path Local path of the output artifact outputs.parameters..path Local path of the output parameter Loops ( withItems / withParam ) \u00b6 Variable Description item Value of the item in a list item. Field value of the item in a list of maps Metrics \u00b6 When emitting custom metrics in a template , special variables are available that allow self-reference to the current step. Variable Description status Phase status of the metric-emitting template duration Duration of the metric-emitting template in seconds (only applicable in Template -level metrics, for Workflow -level use workflow.duration ) exitCode Exit code of the metric-emitting template inputs.parameters. Input parameter of the metric-emitting template outputs.parameters. Output parameter of the metric-emitting template outputs.result Output result of the metric-emitting template resourcesDuration.{cpu,memory} Resources duration in seconds . Must be one of resourcesDuration.cpu or resourcesDuration.memory , if available. For more info, see the Resource Duration doc. Real-Time Metrics \u00b6 Some variables can be emitted in real-time (as opposed to just when the step/task completes). To emit these variables in real time, set realtime: true under gauge (note: only Gauge metrics allow for real time variable emission). Metrics currently available for real time emission: For Workflow -level metrics: workflow.duration For Template -level metrics: duration Global \u00b6 Variable Description workflow.name Workflow name workflow.namespace Workflow namespace workflow.mainEntrypoint Workflow's initial entrypoint workflow.serviceAccountName Workflow service account name workflow.uid Workflow UID. Useful for setting ownership reference to a resource, or a unique artifact location workflow.parameters. Input parameter to the workflow workflow.parameters All input parameters to the workflow as a JSON string (this is deprecated in favor of workflow.parameters.json as this doesn't work with expression tags and that does) workflow.parameters.json All input parameters to the workflow as a JSON string workflow.outputs.parameters. Global parameter in the workflow workflow.outputs.artifacts. Global artifact in the workflow workflow.annotations. Workflow annotations workflow.annotations.json all Workflow annotations as a JSON string workflow.labels. Workflow labels workflow.labels.json all Workflow labels as a JSON string workflow.creationTimestamp Workflow creation time-stamp formatted in RFC 3339 (e.g. 2018-08-23T05:42:49Z ) workflow.creationTimestamp. Creation time-stamp formatted with a strftime format character. workflow.creationTimestamp.RFC3339 Creation time-stamp formatted with in RFC 3339. 
workflow.priority Workflow priority workflow.duration Workflow duration estimate in seconds, may differ from actual duration by a couple of seconds workflow.scheduledTime Scheduled runtime formatted in RFC 3339 (only available for CronWorkflow ) Exit Handler \u00b6 Variable Description workflow.status Workflow status. One of: Succeeded , Failed , Error workflow.failures A list of JSON objects containing information about nodes that failed or errored during execution. Available fields: displayName , message , templateName , phase , podName , and finishedAt . Knowing where you are \u00b6 The idea with creating a WorkflowTemplate is that they are reusable bits of code you will use in many actual Workflows. Sometimes it is useful to know which workflow you are part of. workflow.mainEntrypoint is one way you can do this. If each of your actual workflows has a differing entrypoint, you can identify the workflow you're part of. Given this use in a WorkflowTemplate : apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : say-main-entrypoint spec : entrypoint : echo templates : - name : echo container : image : alpine command : [ echo ] args : [ \"{{workflow.mainEntrypoint}}\" ] I can distinguish my caller: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : foo- spec : entrypoint : foo templates : - name : foo steps : - - name : step templateRef : name : say-main-entrypoint template : echo results in a log of foo apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : bar- spec : entrypoint : bar templates : - name : bar steps : - - name : step templateRef : name : say-main-entrypoint template : echo results in a log of bar This shouldn't be that helpful in logging, you should be able to identify workflows through other labels in your cluster's log tool, but can be helpful when generating metrics for the workflow for example.","title":"Workflow Variables"},{"location":"variables/#workflow-variables","text":"Some fields in a workflow specification allow for variable references which are automatically substituted by Argo.","title":"Workflow Variables"},{"location":"variables/#how-to-use-variables","text":"Variables are enclosed in curly braces: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-parameters- spec : entrypoint : whalesay arguments : parameters : - name : message value : hello world templates : - name : whalesay inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] The following variables are made available to reference various meta-data of a workflow:","title":"How to use variables"},{"location":"variables/#template-tag-kinds","text":"There are two kinds of template tag: simple The default, e.g. {{workflow.name}} expression Where {{ is immediately followed by = , e.g. {{=workflow.name}} .","title":"Template Tag Kinds"},{"location":"variables/#simple","text":"The tag is substituted with the variable that has a name the same as the tag. Simple tags may have white-space between the brackets and variable as seen below. However, there is a known issue where variables may fail to interpolate with white-space, so it is recommended to avoid using white-space until this issue is resolved. Please report unexpected behavior with reproducible examples. 
args : [ \"{{ inputs.parameters.message }}\" ]","title":"Simple"},{"location":"variables/#expression","text":"Since v3.1 The tag is substituted with the result of evaluating the tag as an expression. Note that any hyphenated parameter names or step names will cause a parsing error. You can reference them by indexing into the parameter or step map, e.g. inputs.parameters['my-param'] or steps['my-step'].outputs.result . Learn about the expression syntax .","title":"Expression"},{"location":"variables/#examples","text":"Plain list: [1, 2] Filter a list: filter([1, 2], { # > 1}) Map a list: map([1, 2], { # * 2 }) We provide some core functions: Cast to int: asInt(inputs.parameters['my-int-param']) Cast to float: asFloat(inputs.parameters['my-float-param']) Cast to string: string(1) Convert to a JSON string (needed for withParam ): toJson([1, 2]) Extract data from JSON: jsonpath(inputs.parameters.json, '$.some.path') You can also use Sprig functions : Trim a string: sprig.trim(inputs.parameters['my-string-param']) Sprig error handling Sprig functions often do not raise errors. For example, if int is used on an invalid value, it returns 0 . Please review the Sprig documentation to understand which functions raise errors and which do not.","title":"Examples"},{"location":"variables/#reference","text":"","title":"Reference"},{"location":"variables/#all-templates","text":"Variable Description inputs.parameters. Input parameter to a template inputs.parameters All input parameters to a template as a JSON string inputs.artifacts. Input artifact to a template node.name Full name of the node","title":"All Templates"},{"location":"variables/#steps-templates","text":"Variable Description steps.name Name of the step steps..id unique id of container step steps..ip IP address of a previous daemon container step steps..status Phase status of any previous step steps..exitCode Exit code of any previous script or container step steps..startedAt Time-stamp when the step started steps..finishedAt Time-stamp when the step finished steps..hostNodeName Host node where task ran (available from version 3.5) steps..outputs.result Output result of any previous container or script step steps..outputs.parameters When the previous step uses withItems or withParams , this contains a JSON array of the output parameter maps of each invocation steps..outputs.parameters. Output parameter of any previous step. When the previous step uses withItems or withParams , this contains a JSON array of the output parameter values of each invocation steps..outputs.artifacts. Output artifact of any previous step","title":"Steps Templates"},{"location":"variables/#dag-templates","text":"Variable Description tasks.name Name of the task tasks..id unique id of container task tasks..ip IP address of a previous daemon container task tasks..status Phase status of any previous task tasks..exitCode Exit code of any previous script or container task tasks..startedAt Time-stamp when the task started tasks..finishedAt Time-stamp when the task finished tasks..hostNodeName Host node where task ran (available from version 3.5) tasks..outputs.result Output result of any previous container or script task tasks..outputs.parameters When the previous task uses withItems or withParams , this contains a JSON array of the output parameter maps of each invocation tasks..outputs.parameters. Output parameter of any previous task. 
When the previous task uses withItems or withParams , this contains a JSON array of the output parameter values of each invocation tasks..outputs.artifacts. Output artifact of any previous task","title":"DAG Templates"},{"location":"variables/#http-templates","text":"Since v3.3 Only available for successCondition Variable Description request.method Request method ( string ) request.url Request URL ( string ) request.body Request body ( string ) request.headers Request headers ( map[string][]string ) response.statusCode Response status code ( int ) response.body Response body ( string ) response.headers Response headers ( map[string][]string )","title":"HTTP Templates"},{"location":"variables/#retrystrategy","text":"When using the expression field within retryStrategy , special variables are available. Variable Description lastRetry.exitCode Exit code of the last retry lastRetry.status Status of the last retry lastRetry.duration Duration in seconds of the last retry lastRetry.message Message output from the last retry (available from version 3.5) Note: These variables evaluate to a string type. If using advanced expressions, either cast them to int values ( expression: \"{{=asInt(lastRetry.exitCode) >= 2}}\" ) or compare them to string values ( expression: \"{{=lastRetry.exitCode != '2'}}\" ).","title":"RetryStrategy"},{"location":"variables/#containerscript-templates","text":"Variable Description pod.name Pod name of the container/script retries The retry number of the container/script if retryStrategy is specified inputs.artifacts..path Local path of the input artifact outputs.artifacts..path Local path of the output artifact outputs.parameters..path Local path of the output parameter","title":"Container/Script Templates"},{"location":"variables/#loops-withitems-withparam","text":"Variable Description item Value of the item in a list item. Field value of the item in a list of maps","title":"Loops (withItems / withParam)"},{"location":"variables/#metrics","text":"When emitting custom metrics in a template , special variables are available that allow self-reference to the current step. Variable Description status Phase status of the metric-emitting template duration Duration of the metric-emitting template in seconds (only applicable in Template -level metrics, for Workflow -level use workflow.duration ) exitCode Exit code of the metric-emitting template inputs.parameters. Input parameter of the metric-emitting template outputs.parameters. Output parameter of the metric-emitting template outputs.result Output result of the metric-emitting template resourcesDuration.{cpu,memory} Resources duration in seconds . Must be one of resourcesDuration.cpu or resourcesDuration.memory , if available. For more info, see the Resource Duration doc.","title":"Metrics"},{"location":"variables/#real-time-metrics","text":"Some variables can be emitted in real-time (as opposed to just when the step/task completes). To emit these variables in real time, set realtime: true under gauge (note: only Gauge metrics allow for real time variable emission). Metrics currently available for real time emission: For Workflow -level metrics: workflow.duration For Template -level metrics: duration","title":"Real-Time Metrics"},{"location":"variables/#global","text":"Variable Description workflow.name Workflow name workflow.namespace Workflow namespace workflow.mainEntrypoint Workflow's initial entrypoint workflow.serviceAccountName Workflow service account name workflow.uid Workflow UID. 
Useful for setting ownership reference to a resource, or a unique artifact location workflow.parameters. Input parameter to the workflow workflow.parameters All input parameters to the workflow as a JSON string (this is deprecated in favor of workflow.parameters.json as this doesn't work with expression tags and that does) workflow.parameters.json All input parameters to the workflow as a JSON string workflow.outputs.parameters. Global parameter in the workflow workflow.outputs.artifacts. Global artifact in the workflow workflow.annotations. Workflow annotations workflow.annotations.json all Workflow annotations as a JSON string workflow.labels. Workflow labels workflow.labels.json all Workflow labels as a JSON string workflow.creationTimestamp Workflow creation time-stamp formatted in RFC 3339 (e.g. 2018-08-23T05:42:49Z ) workflow.creationTimestamp. Creation time-stamp formatted with a strftime format character. workflow.creationTimestamp.RFC3339 Creation time-stamp formatted with in RFC 3339. workflow.priority Workflow priority workflow.duration Workflow duration estimate in seconds, may differ from actual duration by a couple of seconds workflow.scheduledTime Scheduled runtime formatted in RFC 3339 (only available for CronWorkflow )","title":"Global"},{"location":"variables/#exit-handler","text":"Variable Description workflow.status Workflow status. One of: Succeeded , Failed , Error workflow.failures A list of JSON objects containing information about nodes that failed or errored during execution. Available fields: displayName , message , templateName , phase , podName , and finishedAt .","title":"Exit Handler"},{"location":"variables/#knowing-where-you-are","text":"The idea with creating a WorkflowTemplate is that they are reusable bits of code you will use in many actual Workflows. Sometimes it is useful to know which workflow you are part of. workflow.mainEntrypoint is one way you can do this. If each of your actual workflows has a differing entrypoint, you can identify the workflow you're part of. Given this use in a WorkflowTemplate : apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : say-main-entrypoint spec : entrypoint : echo templates : - name : echo container : image : alpine command : [ echo ] args : [ \"{{workflow.mainEntrypoint}}\" ] I can distinguish my caller: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : foo- spec : entrypoint : foo templates : - name : foo steps : - - name : step templateRef : name : say-main-entrypoint template : echo results in a log of foo apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : bar- spec : entrypoint : bar templates : - name : bar steps : - - name : step templateRef : name : say-main-entrypoint template : echo results in a log of bar This shouldn't be that helpful in logging, you should be able to identify workflows through other labels in your cluster's log tool, but can be helpful when generating metrics for the workflow for example.","title":"Knowing where you are"},{"location":"webhooks/","text":"Webhooks \u00b6 v2.11 and after Many clients can send events via the events API endpoint using a standard authorization header. However, for clients that are unable to do so (e.g. because they use signature verification as proof of origin), additional configuration is required. 
In the namespace that will receive the event, create access token resources for your client: A role with permissions to get workflow templates and to create a workflow: example A service account for the client: example . A binding of the account to the role: example Additionally create: A secret named argo-workflows-webhook-clients listing the service accounts: example The secret argo-workflows-webhook-clients tells Argo: What type of webhook the account can be used for, e.g. github . What \"secret\" that webhook is configured for, e.g. in your Github settings page.","title":"Webhooks"},{"location":"webhooks/#webhooks","text":"v2.11 and after Many clients can send events via the events API endpoint using a standard authorization header. However, for clients that are unable to do so (e.g. because they use signature verification as proof of origin), additional configuration is required. In the namespace that will receive the event, create access token resources for your client: A role with permissions to get workflow templates and to create a workflow: example A service account for the client: example . A binding of the account to the role: example Additionally create: A secret named argo-workflows-webhook-clients listing the service accounts: example The secret argo-workflows-webhook-clients tells Argo: What type of webhook the account can be used for, e.g. github . What \"secret\" that webhook is configured for, e.g. in your Github settings page.","title":"Webhooks"},{"location":"widgets/","text":"Widgets \u00b6 v3.0 and after Widgets are intended to be embedded into other applications using inline frames ( iframe ). This may not work with your configuration. You may need to: Run the Argo Server with an account that can read workflows. That can be done using --auth-mode=server and configuring the argo-server service account. Run the Argo Server with --x-frame-options=SAMEORIGIN or --x-frame-options= .","title":"Widgets"},{"location":"widgets/#widgets","text":"v3.0 and after Widgets are intended to be embedded into other applications using inline frames ( iframe ). This may not work with your configuration. You may need to: Run the Argo Server with an account that can read workflows. That can be done using --auth-mode=server and configuring the argo-server service account. Run the Argo Server with --x-frame-options=SAMEORIGIN or --x-frame-options= .","title":"Widgets"},{"location":"windows/","text":"Windows Container Support \u00b6 The Argo server and the workflow controller currently only run on Linux. The workflow executor however also runs on Windows nodes, meaning you can use Windows containers inside your workflows! Here are the steps to get started. 
Requirements \u00b6 Kubernetes 1.14 or later, supporting Windows nodes Hybrid cluster containing Linux and Windows nodes like described in the Kubernetes docs Argo configured and running like described here Schedule workflows with Windows containers \u00b6 If you're running workflows in your hybrid Kubernetes cluster, always make sure to include a nodeSelector to run the steps on the correct host OS: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-windows- spec : entrypoint : hello-win templates : - name : hello-win nodeSelector : kubernetes.io/os : windows # specify the OS your step should run on container : image : mcr.microsoft.com/windows/nanoserver:1809 command : [ \"cmd\" , \"/c\" ] args : [ \"echo\" , \"Hello from Windows Container!\" ] You can run this example and get the logs: $ argo submit --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-windows.yaml $ argo logs hello-windows-s9kk5 hello-windows-s9kk5: \"Hello from Windows Container!\" Schedule hybrid workflows \u00b6 You can also run different steps on different host operating systems. This can for example be very helpful when you need to compile your application on Windows and Linux. An example workflow can look like the following: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-hybrid- spec : entrypoint : mytemplate templates : - name : mytemplate steps : - - name : step1 template : hello-win - - name : step2 template : hello-linux - name : hello-win nodeSelector : kubernetes.io/os : windows container : image : mcr.microsoft.com/windows/nanoserver:1809 command : [ \"cmd\" , \"/c\" ] args : [ \"echo\" , \"Hello from Windows Container!\" ] - name : hello-linux nodeSelector : kubernetes.io/os : linux container : image : alpine command : [ echo ] args : [ \"Hello from Linux Container!\" ] Again, you can run this example and get the logs: $ argo submit --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-hybrid.yaml $ argo logs hello-hybrid-plqpp hello-hybrid-plqpp-1977432187: \"Hello from Windows Container!\" hello-hybrid-plqpp-764774907: Hello from Linux Container! Artifact mount path \u00b6 Artifacts work mostly the same way as on Linux. All paths get automatically mapped to the C: drive. For example: # ... - name : print-message inputs : artifacts : # unpack the message input artifact # and put it at C:\\message - name : message path : \"/message\" # gets mapped to C:\\message nodeSelector : kubernetes.io/os : windows container : image : mcr.microsoft.com/windows/nanoserver:1809 command : [ \"cmd\" , \"/c\" ] args : [ \"dir C:\\\\message\" ] # List the C:\\message directory Remember that volume mounts on Windows can only target a directory in the container, and not an individual file. Limitations \u00b6 Sharing process namespaces doesn't work on Windows so you can't use the Process Namespace Sharing (PNS) workflow executor. The executor Windows container is built using Nano Server as the base image. Running a newer windows version (e.g. 1909) is currently not confirmed to be working . If this is required, you need to build the executor container yourself by first adjusting the base image. Building the workflow executor image for Windows \u00b6 To build the workflow executor image for Windows you need a Windows machine running Windows Server 2019 with Docker installed like described in the docs . 
You then clone the project and run the Docker build with the Dockerfile for Windows and argoexec as a target: git clone https://github.com/argoproj/argo-workflows.git cd argo docker build -t myargoexec -f . \\D ockerfile.windows --target argoexec .","title":"Windows Container Support"},{"location":"windows/#windows-container-support","text":"The Argo server and the workflow controller currently only run on Linux. The workflow executor however also runs on Windows nodes, meaning you can use Windows containers inside your workflows! Here are the steps to get started.","title":"Windows Container Support"},{"location":"windows/#requirements","text":"Kubernetes 1.14 or later, supporting Windows nodes Hybrid cluster containing Linux and Windows nodes like described in the Kubernetes docs Argo configured and running like described here","title":"Requirements"},{"location":"windows/#schedule-workflows-with-windows-containers","text":"If you're running workflows in your hybrid Kubernetes cluster, always make sure to include a nodeSelector to run the steps on the correct host OS: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-windows- spec : entrypoint : hello-win templates : - name : hello-win nodeSelector : kubernetes.io/os : windows # specify the OS your step should run on container : image : mcr.microsoft.com/windows/nanoserver:1809 command : [ \"cmd\" , \"/c\" ] args : [ \"echo\" , \"Hello from Windows Container!\" ] You can run this example and get the logs: $ argo submit --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-windows.yaml $ argo logs hello-windows-s9kk5 hello-windows-s9kk5: \"Hello from Windows Container!\"","title":"Schedule workflows with Windows containers"},{"location":"windows/#schedule-hybrid-workflows","text":"You can also run different steps on different host operating systems. This can for example be very helpful when you need to compile your application on Windows and Linux. An example workflow can look like the following: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-hybrid- spec : entrypoint : mytemplate templates : - name : mytemplate steps : - - name : step1 template : hello-win - - name : step2 template : hello-linux - name : hello-win nodeSelector : kubernetes.io/os : windows container : image : mcr.microsoft.com/windows/nanoserver:1809 command : [ \"cmd\" , \"/c\" ] args : [ \"echo\" , \"Hello from Windows Container!\" ] - name : hello-linux nodeSelector : kubernetes.io/os : linux container : image : alpine command : [ echo ] args : [ \"Hello from Linux Container!\" ] Again, you can run this example and get the logs: $ argo submit --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-hybrid.yaml $ argo logs hello-hybrid-plqpp hello-hybrid-plqpp-1977432187: \"Hello from Windows Container!\" hello-hybrid-plqpp-764774907: Hello from Linux Container!","title":"Schedule hybrid workflows"},{"location":"windows/#artifact-mount-path","text":"Artifacts work mostly the same way as on Linux. All paths get automatically mapped to the C: drive. For example: # ... 
- name : print-message inputs : artifacts : # unpack the message input artifact # and put it at C:\\message - name : message path : \"/message\" # gets mapped to C:\\message nodeSelector : kubernetes.io/os : windows container : image : mcr.microsoft.com/windows/nanoserver:1809 command : [ \"cmd\" , \"/c\" ] args : [ \"dir C:\\\\message\" ] # List the C:\\message directory Remember that volume mounts on Windows can only target a directory in the container, and not an individual file.","title":"Artifact mount path"},{"location":"windows/#limitations","text":"Sharing process namespaces doesn't work on Windows so you can't use the Process Namespace Sharing (PNS) workflow executor. The executor Windows container is built using Nano Server as the base image. Running a newer windows version (e.g. 1909) is currently not confirmed to be working . If this is required, you need to build the executor container yourself by first adjusting the base image.","title":"Limitations"},{"location":"windows/#building-the-workflow-executor-image-for-windows","text":"To build the workflow executor image for Windows you need a Windows machine running Windows Server 2019 with Docker installed like described in the docs . You then clone the project and run the Docker build with the Dockerfile for Windows and argoexec as a target: git clone https://github.com/argoproj/argo-workflows.git cd argo docker build -t myargoexec -f . \\D ockerfile.windows --target argoexec .","title":"Building the workflow executor image for Windows"},{"location":"work-avoidance/","text":"Work Avoidance \u00b6 v2.9 and after You can make workflows faster and more robust by employing work avoidance . A workflow that utilizes this is simply a workflow containing steps that do not run if the work has already been done. This is a technique is similar to memoization . Work avoidance is totally in your control and you make the decisions as to have to skip the work. Memoization is a feature of Argo Workflows to automatically skip steps which generate outputs. Prior to version 3.5 this required outputs to be specified, but you can use memoization for all steps and tasks in version 3.5 or later. This simplest way to do this is to use marker files . Use cases: An expensive step appears across multiple workflows - you want to avoid repeating them. A workflow has unreliable tasks - you want to be able to resubmit the workflow. A marker file is a file that indicates the work has already been done. Before doing the work you check to see if the marker has already been done: if [ -e /work/markers/name-of-task ] ; then echo \"work already done\" exit 0 fi echo \"working very hard\" touch /work/markers/name-of-task Choose a name for the file that is unique for the task, e.g. the template name and all the parameters: touch /work/markers/ $( date +%Y-%m-%d ) -echo- {{ inputs.parameters.num }} You need to store the marker files between workflows and this can be achieved using a PVC and optional input artifact . This complete work avoidance example has the following: A PVC to store the markers on. A load-markers step that loads the marker files from artifact storage. Multiple echo tasks that avoid work using marker files. A save-markers exit handler to save the marker files, even if they are not needed.","title":"Work Avoidance"},{"location":"work-avoidance/#work-avoidance","text":"v2.9 and after You can make workflows faster and more robust by employing work avoidance . 
A workflow that utilizes this is simply a workflow containing steps that do not run if the work has already been done. This technique is similar to memoization . Work avoidance is totally in your control and you make the decisions as to how to skip the work. Memoization is a feature of Argo Workflows to automatically skip steps which generate outputs. Prior to version 3.5 this required outputs to be specified, but you can use memoization for all steps and tasks in version 3.5 or later. The simplest way to do this is to use marker files . Use cases: An expensive step appears across multiple workflows - you want to avoid repeating it. A workflow has unreliable tasks - you want to be able to resubmit the workflow. A marker file is a file that indicates the work has already been done. Before doing the work, you check to see if the marker already exists: if [ -e /work/markers/name-of-task ] ; then echo \"work already done\" exit 0 fi echo \"working very hard\" touch /work/markers/name-of-task Choose a name for the file that is unique for the task, e.g. the template name and all the parameters: touch /work/markers/ $( date +%Y-%m-%d ) -echo- {{ inputs.parameters.num }} You need to store the marker files between workflows, and this can be achieved using a PVC and an optional input artifact . This complete work avoidance example has the following: A PVC to store the markers on. A load-markers step that loads the marker files from artifact storage. Multiple echo tasks that avoid work using marker files. A save-markers exit handler to save the marker files, even if they are not needed.","title":"Work Avoidance"},{"location":"workflow-archive/","text":"Workflow Archive \u00b6 v2.5 and after If you want to keep completed workflows for a long time, you can use the workflow archive to save them in a Postgres or MySQL (>= 5.7.8) database. The workflow archive stores the status of the workflow, which pods were executed, what the result was, etc. The job logs of the workflow pods will not be archived. If you need to save the logs of the pods, you must set up an artifact repository according to this doc . The quick-start deployment includes a Postgres database server. In this case the workflow archive is already enabled. Such a deployment is convenient for test environments, but in a production environment you must use a production-quality database service. Enabling Workflow Archive \u00b6 To enable archiving of the workflows, you must configure database parameters in the persistence section of your configuration and set archive: to true . Example: persistence : archive : true postgresql : host : localhost port : 5432 database : postgres tableName : argo_workflows userNameSecret : name : argo - postgres - config key : username passwordSecret : name : argo - postgres - config key : password You must also create the secret with the database user and password in the namespace of the workflow controller. Example: kubectl create secret generic argo-postgres-config -n argo --from-literal=password=mypassword --from-literal=username=argodbuser Note that IAM-based authentication is not currently supported. However, you can start your database proxy as a sidecar (e.g. via CloudSQL Proxy on GCP) and then specify your local proxy address, IAM username, and an empty string as your password in the persistence configuration to connect to it.
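If you use MySQL rather than Postgres, the persistence block looks similar. A sketch, assuming a secret laid out like the Postgres example above (the host, port, database, and secret names are assumptions; check workflow-controller-configmap.yaml for the full set of MySQL options):

```yaml
# Sketch only: MySQL variant of the persistence configuration. Values mirror
# the Postgres example above and are illustrative, not required names.
persistence:
  archive: true
  mysql:
    host: mysql
    port: 3306
    database: argo
    tableName: argo_workflows
    userNameSecret:
      name: argo-mysql-config
      key: username
    passwordSecret:
      name: argo-mysql-config
      key: password
```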
The following tables will be created in the database when you start the workflow controller with the archive enabled: argo_workflows argo_archived_workflows argo_archived_workflows_labels schema_history Automatic Database Migration \u00b6 Every time the Argo workflow-controller starts with persistence enabled, it tries to migrate the database to the correct version. If the database migration fails, the workflow-controller will also fail to start. In this case you can delete all the above tables and restart the workflow-controller. If you know what you are doing, you also have the option to skip migration: persistence : skipMigration : true Required database permissions \u00b6 Postgres \u00b6 The database user/role must have CREATE and USAGE permissions on the public schema of the database so that the tables can be created during the migration. Archive TTL \u00b6 You can configure the time period to keep archived workflows before they are deleted by the archived workflow garbage collection function. The default is forever. Example: persistence : archiveTTL : 10 d The ARCHIVED_WORKFLOW_GC_PERIOD variable defines the periodicity of running the garbage collection function. The default value is documented here . When the workflow controller starts, it sets the ticker to run every ARCHIVED_WORKFLOW_GC_PERIOD . It does not run the garbage collection function immediately; the first garbage collection happens only after the period defined in the ARCHIVED_WORKFLOW_GC_PERIOD variable. Cluster Name \u00b6 Optionally you can set a unique name for your Kubernetes cluster. This name will populate the clustername field in the argo_archived_workflows table. Example: persistence : clusterName : dev - cluster Disabling Workflow Archive \u00b6 To disable archiving of the workflows, set archive: to false in the persistence section of your configuration . Example: persistence : archive : false","title":"Workflow Archive"},{"location":"workflow-archive/#workflow-archive","text":"v2.5 and after If you want to keep completed workflows for a long time, you can use the workflow archive to save them in a Postgres or MySQL (>= 5.7.8) database. The workflow archive stores the status of the workflow, which pods were executed, what the result was, etc. The job logs of the workflow pods will not be archived. If you need to save the logs of the pods, you must set up an artifact repository according to this doc . The quick-start deployment includes a Postgres database server. In this case the workflow archive is already enabled. Such a deployment is convenient for test environments, but in a production environment you must use a production-quality database service.","title":"Workflow Archive"},{"location":"workflow-archive/#enabling-workflow-archive","text":"To enable archiving of the workflows, you must configure database parameters in the persistence section of your configuration and set archive: to true . Example: persistence : archive : true postgresql : host : localhost port : 5432 database : postgres tableName : argo_workflows userNameSecret : name : argo - postgres - config key : username passwordSecret : name : argo - postgres - config key : password You must also create the secret with the database user and password in the namespace of the workflow controller. Example: kubectl create secret generic argo-postgres-config -n argo --from-literal=password=mypassword --from-literal=username=argodbuser Note that IAM-based authentication is not currently supported.
However, you can start your database proxy as a sidecar (e.g. via CloudSQL Proxy on GCP) and then specify your local proxy address, IAM username, and an empty string as your password in the persistence configuration to connect to it. The following tables will be created in the database when you start the workflow controller with enabled archive: argo_workflows argo_archived_workflows argo_archived_workflows_labels schema_history","title":"Enabling Workflow Archive"},{"location":"workflow-archive/#automatic-database-migration","text":"Every time the Argo workflow-controller starts with persistence enabled, it tries to migrate the database to the correct version. If the database migration fails, the workflow-controller will also fail to start. In this case you can delete all the above tables and restart the workflow-controller. If you know what are you doing you also have an option to skip migration: persistence : skipMigration : true","title":"Automatic Database Migration"},{"location":"workflow-archive/#required-database-permissions","text":"","title":"Required database permissions"},{"location":"workflow-archive/#postgres","text":"The database user/role must have CREATE and USAGE permissions on the public schema of the database so that the tables can be created during the migration.","title":"Postgres"},{"location":"workflow-archive/#archive-ttl","text":"You can configure the time period to keep archived workflows before they will be deleted by the archived workflow garbage collection function. The default is forever. Example: persistence : archiveTTL : 10 d The ARCHIVED_WORKFLOW_GC_PERIOD variable defines the periodicity of running the garbage collection function. The default value is documented here . When the workflow controller starts, it sets the ticker to run every ARCHIVED_WORKFLOW_GC_PERIOD . It does not run the garbage collection function immediately and the first garbage collection happens only after the period defined in the ARCHIVED_WORKFLOW_GC_PERIOD variable.","title":"Archive TTL"},{"location":"workflow-archive/#cluster-name","text":"Optionally you can set a unique name of your Kubernetes cluster. This name will populate the clustername field in the argo_archived_workflows table. Example: persistence : clusterName : dev - cluster","title":"Cluster Name"},{"location":"workflow-archive/#disabling-workflow-archive","text":"To disable archiving of the workflows, set archive: to false in the persistence section of your configuration . Example: persistence : archive : false","title":"Disabling Workflow Archive"},{"location":"workflow-concepts/","text":"Core Concepts \u00b6 This page serves as an introduction to the core concepts of Argo. The Workflow \u00b6 The Workflow is the most important resource in Argo and serves two important functions: It defines the workflow to be executed. It stores the state of the workflow. Because of these dual responsibilities, a Workflow should be treated as a \"live\" object. It is not only a static definition, but is also an \"instance\" of said definition. (If it isn't clear what this means, it will be explained below). Workflow Spec \u00b6 The workflow to be executed is defined in the Workflow.spec field. The core structure of a Workflow spec is a list of templates and an entrypoint . templates can be loosely thought of as \"functions\": they define instructions to be executed. The entrypoint field defines what the \"main\" function will be \u2013 that is, the template that will be executed first. 
Here is an example of a simple Workflow spec with a single template : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world- # Name of this Workflow spec : entrypoint : whalesay # Defines \"whalesay\" as the \"main\" template templates : - name : whalesay # Defining the \"whalesay\" template container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ] # This template runs \"cowsay\" in the \"whalesay\" image with arguments \"hello world\" template Types \u00b6 There are 6 types of templates, divided into two different categories. Template Definitions \u00b6 These templates define work to be done, usually in a Container. Container \u00b6 Perhaps the most common template type, it will schedule a Container. The spec of the template is the same as the Kubernetes container spec , so you can define a container here the same way you do anywhere else in Kubernetes. Example: - name : whalesay container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ] Script \u00b6 A convenience wrapper around a container . The spec is the same as a container, but adds the source: field which allows you to define a script in-place. The script will be saved into a file and executed for you. The result of the script is automatically exported into an Argo variable either {{tasks..outputs.result}} or {{steps..outputs.result}} , depending how it was called. Example: - name : gen-random-int script : image : python:alpine3.6 command : [ python ] source : | import random i = random.randint(1, 100) print(i) Resource \u00b6 Performs operations on cluster Resources directly. It can be used to get, create, apply, delete, replace, or patch resources on your cluster. This example creates a ConfigMap resource on the cluster: - name : k8s-owner-reference resource : action : create manifest : | apiVersion: v1 kind: ConfigMap metadata: generateName: owned-eg- data: some: value Suspend \u00b6 A suspend template will suspend execution, either for a duration or until it is resumed manually. Suspend templates can be resumed from the CLI (with argo resume ), the API endpoint , or the UI. Example: - name : delay suspend : duration : \"20s\" Template Invocators \u00b6 These templates are used to invoke/call other templates and provide execution control. Steps \u00b6 A steps template allows you to define your tasks in a series of steps. The structure of the template is a \"list of lists\". Outer lists will run sequentially and inner lists will run in parallel. If you want to run inner lists one by one, use the Synchronization feature. You can set a wide array of options to control execution, such as when: clauses to conditionally execute a step . In this example step1 runs first. Once it is completed, step2a and step2b will run in parallel: - name : hello-hello-hello steps : - - name : step1 template : prepare-data - - name : step2a template : run-data-first-half - name : step2b template : run-data-second-half DAG \u00b6 A dag template allows you to define your tasks as a graph of dependencies. In a DAG, you list all your tasks and set which other tasks must complete before a particular task can begin. Tasks without any dependencies will be run immediately. In this example A runs first. 
Once it is completed, B and C will run in parallel and once they both complete, D will run: - name : diamond dag : tasks : - name : A template : echo - name : B dependencies : [ A ] template : echo - name : C dependencies : [ A ] template : echo - name : D dependencies : [ B , C ] template : echo Architecture \u00b6 If you are interested in Argo's underlying architecture, see Architecture .","title":"Core Concepts"},{"location":"workflow-concepts/#core-concepts","text":"This page serves as an introduction to the core concepts of Argo.","title":"Core Concepts"},{"location":"workflow-concepts/#the-workflow","text":"The Workflow is the most important resource in Argo and serves two important functions: It defines the workflow to be executed. It stores the state of the workflow. Because of these dual responsibilities, a Workflow should be treated as a \"live\" object. It is not only a static definition, but is also an \"instance\" of said definition. (If it isn't clear what this means, it will be explained below).","title":"The Workflow"},{"location":"workflow-concepts/#workflow-spec","text":"The workflow to be executed is defined in the Workflow.spec field. The core structure of a Workflow spec is a list of templates and an entrypoint . templates can be loosely thought of as \"functions\": they define instructions to be executed. The entrypoint field defines what the \"main\" function will be \u2013 that is, the template that will be executed first. Here is an example of a simple Workflow spec with a single template : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world- # Name of this Workflow spec : entrypoint : whalesay # Defines \"whalesay\" as the \"main\" template templates : - name : whalesay # Defining the \"whalesay\" template container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ] # This template runs \"cowsay\" in the \"whalesay\" image with arguments \"hello world\"","title":"Workflow Spec"},{"location":"workflow-concepts/#template-types","text":"There are 6 types of templates, divided into two different categories.","title":"template Types"},{"location":"workflow-concepts/#template-definitions","text":"These templates define work to be done, usually in a Container.","title":"Template Definitions"},{"location":"workflow-concepts/#container","text":"Perhaps the most common template type, it will schedule a Container. The spec of the template is the same as the Kubernetes container spec , so you can define a container here the same way you do anywhere else in Kubernetes. Example: - name : whalesay container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ]","title":"Container"},{"location":"workflow-concepts/#script","text":"A convenience wrapper around a container . The spec is the same as a container, but adds the source: field which allows you to define a script in-place. The script will be saved into a file and executed for you. The result of the script is automatically exported into an Argo variable either {{tasks..outputs.result}} or {{steps..outputs.result}} , depending how it was called. Example: - name : gen-random-int script : image : python:alpine3.6 command : [ python ] source : | import random i = random.randint(1, 100) print(i)","title":"Script"},{"location":"workflow-concepts/#resource","text":"Performs operations on cluster Resources directly. It can be used to get, create, apply, delete, replace, or patch resources on your cluster. 
This example creates a ConfigMap resource on the cluster: - name : k8s-owner-reference resource : action : create manifest : | apiVersion: v1 kind: ConfigMap metadata: generateName: owned-eg- data: some: value","title":"Resource"},{"location":"workflow-concepts/#suspend","text":"A suspend template will suspend execution, either for a duration or until it is resumed manually. Suspend templates can be resumed from the CLI (with argo resume ), the API endpoint , or the UI. Example: - name : delay suspend : duration : \"20s\"","title":"Suspend"},{"location":"workflow-concepts/#template-invocators","text":"These templates are used to invoke/call other templates and provide execution control.","title":"Template Invocators"},{"location":"workflow-concepts/#steps","text":"A steps template allows you to define your tasks in a series of steps. The structure of the template is a \"list of lists\". Outer lists will run sequentially and inner lists will run in parallel. If you want to run inner lists one by one, use the Synchronization feature. You can set a wide array of options to control execution, such as when: clauses to conditionally execute a step . In this example step1 runs first. Once it is completed, step2a and step2b will run in parallel: - name : hello-hello-hello steps : - - name : step1 template : prepare-data - - name : step2a template : run-data-first-half - name : step2b template : run-data-second-half","title":"Steps"},{"location":"workflow-concepts/#dag","text":"A dag template allows you to define your tasks as a graph of dependencies. In a DAG, you list all your tasks and set which other tasks must complete before a particular task can begin. Tasks without any dependencies will be run immediately. In this example A runs first. Once it is completed, B and C will run in parallel and once they both complete, D will run: - name : diamond dag : tasks : - name : A template : echo - name : B dependencies : [ A ] template : echo - name : C dependencies : [ A ] template : echo - name : D dependencies : [ B , C ] template : echo","title":"DAG"},{"location":"workflow-concepts/#architecture","text":"If you are interested in Argo's underlying architecture, see Architecture .","title":"Architecture"},{"location":"workflow-controller-configmap/","text":"Workflow Controller Config Map \u00b6 Introduction \u00b6 The Workflow Controller Config Map is used to set controller-wide settings. For a detailed example, please see workflow-controller-configmap.yaml . Alternate Structure \u00b6 In all versions, the configuration may be under a config: | key: # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : config : | instanceID: my-ci-controller artifactRepository: archiveLogs: true s3: endpoint: s3.amazonaws.com bucket: my-bucket region: us-west-2 insecure: false accessKeySecret: name: my-s3-credentials key: accessKey secretKeySecret: name: my-s3-credentials key: secretKey In version 2.7+, the config: | key is optional. However, if the config: | key is not used, all nested maps under top level keys should be strings. This makes it easier to generate the map with some configuration management tools like Kustomize. # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : # \"config: |\" key is optional in 2.7+! 
instanceID : my-ci-controller artifactRepository : | # However, all nested maps must be strings archiveLogs: true s3: endpoint: s3.amazonaws.com bucket: my-bucket region: us-west-2 insecure: false accessKeySecret: name: my-s3-credentials key: accessKey secretKeySecret: name: my-s3-credentials key: secretKey","title":"Workflow Controller Config Map"},{"location":"workflow-controller-configmap/#workflow-controller-config-map","text":"","title":"Workflow Controller Config Map"},{"location":"workflow-controller-configmap/#introduction","text":"The Workflow Controller Config Map is used to set controller-wide settings. For a detailed example, please see workflow-controller-configmap.yaml .","title":"Introduction"},{"location":"workflow-controller-configmap/#alternate-structure","text":"In all versions, the configuration may be under a config: | key: # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : config : | instanceID: my-ci-controller artifactRepository: archiveLogs: true s3: endpoint: s3.amazonaws.com bucket: my-bucket region: us-west-2 insecure: false accessKeySecret: name: my-s3-credentials key: accessKey secretKeySecret: name: my-s3-credentials key: secretKey In version 2.7+, the config: | key is optional. However, if the config: | key is not used, all nested maps under top level keys should be strings. This makes it easier to generate the map with some configuration management tools like Kustomize. # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : # \"config: |\" key is optional in 2.7+! instanceID : my-ci-controller artifactRepository : | # However, all nested maps must be strings archiveLogs: true s3: endpoint: s3.amazonaws.com bucket: my-bucket region: us-west-2 insecure: false accessKeySecret: name: my-s3-credentials key: accessKey secretKeySecret: name: my-s3-credentials key: secretKey","title":"Alternate Structure"},{"location":"workflow-creator/","text":"Workflow Creator \u00b6 v2.9 and after If you create your workflow via the CLI or UI, an attempt will be made to label it with the user who created it apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : name : my-wf labels : workflows.argoproj.io/creator : admin # labels must be DNS formatted, so the \"@\" is replaces by '.at.' workflows.argoproj.io/creator-email : admin.at.your.org workflows.argoproj.io/creator-preferred-username : admin-preferred-username Note Labels only contain [-_.0-9a-zA-Z] , so any other characters will be turned into - .","title":"Workflow Creator"},{"location":"workflow-creator/#workflow-creator","text":"v2.9 and after If you create your workflow via the CLI or UI, an attempt will be made to label it with the user who created it apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : name : my-wf labels : workflows.argoproj.io/creator : admin # labels must be DNS formatted, so the \"@\" is replaces by '.at.' workflows.argoproj.io/creator-email : admin.at.your.org workflows.argoproj.io/creator-preferred-username : admin-preferred-username Note Labels only contain [-_.0-9a-zA-Z] , so any other characters will be turned into - .","title":"Workflow Creator"},{"location":"workflow-events/","text":"Workflow Events \u00b6 v2.7.2 \u26a0\ufe0f Do not use Kubernetes events for automation. Events maybe lost or rolled-up. 
We emit Kubernetes events on certain events. Workflow state change: WorkflowRunning WorkflowSucceeded WorkflowFailed WorkflowTimedOut Node state change: WorkflowNodeRunning WorkflowNodeSucceeded WorkflowNodeFailed WorkflowNodeError The involved object is the workflow in both cases. Additionally, for node state change events, annotations indicate the name and type of the involved node: metadata : name : my-wf.160434cb3af841f8 namespace : my-ns annotations : workflows.argoproj.io/node-name : my-node workflows.argoproj.io/node-type : Pod type : Normal reason : WorkflowNodeSucceeded message : 'Succeeded node my-node: my message' involvedObject : apiVersion : v1alpha1 kind : Workflow name : my-wf namespace : my-ns resourceVersion : \"1234\" uid : my-uid firstTimestamp : \"2020-04-09T16:50:16Z\" lastTimestamp : \"2020-04-09T16:50:16Z\" count : 1","title":"Workflow Events"},{"location":"workflow-events/#workflow-events","text":"v2.7.2 \u26a0\ufe0f Do not use Kubernetes events for automation. Events maybe lost or rolled-up. We emit Kubernetes events on certain events. Workflow state change: WorkflowRunning WorkflowSucceeded WorkflowFailed WorkflowTimedOut Node state change: WorkflowNodeRunning WorkflowNodeSucceeded WorkflowNodeFailed WorkflowNodeError The involved object is the workflow in both cases. Additionally, for node state change events, annotations indicate the name and type of the involved node: metadata : name : my-wf.160434cb3af841f8 namespace : my-ns annotations : workflows.argoproj.io/node-name : my-node workflows.argoproj.io/node-type : Pod type : Normal reason : WorkflowNodeSucceeded message : 'Succeeded node my-node: my message' involvedObject : apiVersion : v1alpha1 kind : Workflow name : my-wf namespace : my-ns resourceVersion : \"1234\" uid : my-uid firstTimestamp : \"2020-04-09T16:50:16Z\" lastTimestamp : \"2020-04-09T16:50:16Z\" count : 1","title":"Workflow Events"},{"location":"workflow-executors/","text":"Workflow Executors \u00b6 A workflow executor is a process that conforms to a specific interface that allows Argo to perform certain actions like monitoring pod logs, collecting artifacts, managing container life-cycles, etc. The executor to be used in your workflows can be changed in the config map under the containerRuntimeExecutor key (removed in v3.4). Emissary (emissary) \u00b6 v3.1 and after Default in >= v3.3. This is the most fully featured executor. Reliability: Works on GKE Autopilot Does not require init process to kill sub-processes. More secure: No privileged access Cannot escape the privileges of the pod's service account Can runAsNonRoot . Scalable: It reads and writes to and from the container's disk and typically does not use any network APIs unless resource type template is used. Artifacts: Output artifacts can be located on the base layer (e.g. /tmp ). Configuration: command should be specified for containers. You can determine values as follows: docker image inspect -f '{{.Config.Entrypoint}} {{.Config.Cmd}}' argoproj/argosay:v2 Learn more about command and args Image Index/Cache \u00b6 If you don't provide command to run, the emissary will grab it from container image. You can also specify it using the workflow spec or emissary will look it up in the image index . This is nothing more fancy than a configuration item . Emissary will create a cache entry, using image with version as key and command as value, and it will reuse it for specific image/version. Exit Code 64 \u00b6 The emissary will exit with code 64 if it fails. 
This may indicate a bug in the emissary. Docker (docker) \u00b6 \u26a0\ufe0fDeprecated. Removed in v3.4. Default in <= v3.2. Least secure: It requires privileged access to docker.sock of the host to be mounted which. Often rejected by Open Policy Agent (OPA) or your Pod Security Policy (PSP). It can escape the privileges of the pod's service account It cannot runAsNonRoot . Equal most scalable: It communicates directly with the local Docker daemon. Artifacts: Output artifacts can be located on the base layer (e.g. /tmp ). Configuration: No additional configuration needed. Note : when using docker as workflow executors, messages printed in both stdout and stderr are captured in the Argo variable .outputs.result . Kubelet (kubelet) \u00b6 \u26a0\ufe0fDeprecated. Removed in v3.4. Secure No privileged access Cannot escape the privileges of the pod's service account runAsNonRoot - TBD, see #4186 Scalable: Operations performed against the local Kubelet Artifacts: Output artifacts must be saved on volumes (e.g. empty-dir ) and not the base image layer (e.g. /tmp ) Step/Task result: Warnings that normally goes to stderr will get captured in a step or a dag task's outputs.result . May require changes if your pipeline is conditioned on steps/tasks.name.outputs.result Configuration: Additional Kubelet configuration maybe needed Kubernetes API ( k8sapi ) \u00b6 \u26a0\ufe0fDeprecated. Removed in v3.4. Reliability: Works on GKE Autopilot Most secure: No privileged access Cannot escape the privileges of the pod's service account Can runAsNonRoot Least scalable: Log retrieval and container operations performed against the remote Kubernetes API Artifacts: Output artifacts must be saved on volumes (e.g. empty-dir ) and not the base image layer (e.g. /tmp ) Step/Task result: Warnings that normally goes to stderr will get captured in a step or a dag task's outputs.result . May require changes if your pipeline is conditioned on steps/tasks.name.outputs.result Configuration: No additional configuration needed. Process Namespace Sharing ( pns ) \u00b6 \u26a0\ufe0fDeprecated. Removed in v3.4. More secure: No privileged access cannot escape the privileges of the pod's service account Can runAsNonRoot , if you use volumes (e.g. empty-dir ) for your output artifacts Processes are visible to other containers in the pod. This includes all information visible in /proc, such as passwords that were passed as arguments or environment variables. These are protected only by regular Unix permissions. Scalable: Most operations use local procfs . Log retrieval uses the remote Kubernetes API Artifacts: Output artifacts can be located on the base layer (e.g. /tmp ) Cannot capture artifacts from a base layer which has a volume mounted under it Cannot capture artifacts from base layer if the container is short-lived. Configuration: No additional configuration needed. Process will no longer run with PID 1 Doesn't work for Windows containers . Learn more","title":"Workflow Executors"},{"location":"workflow-executors/#workflow-executors","text":"A workflow executor is a process that conforms to a specific interface that allows Argo to perform certain actions like monitoring pod logs, collecting artifacts, managing container life-cycles, etc. The executor to be used in your workflows can be changed in the config map under the containerRuntimeExecutor key (removed in v3.4).","title":"Workflow Executors"},{"location":"workflow-executors/#emissary-emissary","text":"v3.1 and after Default in >= v3.3. 
This is the most fully featured executor. Reliability: Works on GKE Autopilot Does not require init process to kill sub-processes. More secure: No privileged access Cannot escape the privileges of the pod's service account Can runAsNonRoot . Scalable: It reads and writes to and from the container's disk and typically does not use any network APIs unless resource type template is used. Artifacts: Output artifacts can be located on the base layer (e.g. /tmp ). Configuration: command should be specified for containers. You can determine values as follows: docker image inspect -f '{{.Config.Entrypoint}} {{.Config.Cmd}}' argoproj/argosay:v2 Learn more about command and args","title":"Emissary (emissary)"},{"location":"workflow-executors/#image-indexcache","text":"If you don't provide command to run, the emissary will grab it from container image. You can also specify it using the workflow spec or emissary will look it up in the image index . This is nothing more fancy than a configuration item . Emissary will create a cache entry, using image with version as key and command as value, and it will reuse it for specific image/version.","title":"Image Index/Cache"},{"location":"workflow-executors/#exit-code-64","text":"The emissary will exit with code 64 if it fails. This may indicate a bug in the emissary.","title":"Exit Code 64"},{"location":"workflow-executors/#docker-docker","text":"\u26a0\ufe0fDeprecated. Removed in v3.4. Default in <= v3.2. Least secure: It requires privileged access to docker.sock of the host to be mounted which. Often rejected by Open Policy Agent (OPA) or your Pod Security Policy (PSP). It can escape the privileges of the pod's service account It cannot runAsNonRoot . Equal most scalable: It communicates directly with the local Docker daemon. Artifacts: Output artifacts can be located on the base layer (e.g. /tmp ). Configuration: No additional configuration needed. Note : when using docker as workflow executors, messages printed in both stdout and stderr are captured in the Argo variable .outputs.result .","title":"Docker (docker)"},{"location":"workflow-executors/#kubelet-kubelet","text":"\u26a0\ufe0fDeprecated. Removed in v3.4. Secure No privileged access Cannot escape the privileges of the pod's service account runAsNonRoot - TBD, see #4186 Scalable: Operations performed against the local Kubelet Artifacts: Output artifacts must be saved on volumes (e.g. empty-dir ) and not the base image layer (e.g. /tmp ) Step/Task result: Warnings that normally goes to stderr will get captured in a step or a dag task's outputs.result . May require changes if your pipeline is conditioned on steps/tasks.name.outputs.result Configuration: Additional Kubelet configuration maybe needed","title":"Kubelet (kubelet)"},{"location":"workflow-executors/#kubernetes-api-k8sapi","text":"\u26a0\ufe0fDeprecated. Removed in v3.4. Reliability: Works on GKE Autopilot Most secure: No privileged access Cannot escape the privileges of the pod's service account Can runAsNonRoot Least scalable: Log retrieval and container operations performed against the remote Kubernetes API Artifacts: Output artifacts must be saved on volumes (e.g. empty-dir ) and not the base image layer (e.g. /tmp ) Step/Task result: Warnings that normally goes to stderr will get captured in a step or a dag task's outputs.result . 
May require changes if your pipeline is conditioned on steps/tasks.name.outputs.result Configuration: No additional configuration needed.","title":"Kubernetes API (k8sapi)"},{"location":"workflow-executors/#process-namespace-sharing-pns","text":"\u26a0\ufe0fDeprecated. Removed in v3.4. More secure: No privileged access cannot escape the privileges of the pod's service account Can runAsNonRoot , if you use volumes (e.g. empty-dir ) for your output artifacts Processes are visible to other containers in the pod. This includes all information visible in /proc, such as passwords that were passed as arguments or environment variables. These are protected only by regular Unix permissions. Scalable: Most operations use local procfs . Log retrieval uses the remote Kubernetes API Artifacts: Output artifacts can be located on the base layer (e.g. /tmp ) Cannot capture artifacts from a base layer which has a volume mounted under it Cannot capture artifacts from base layer if the container is short-lived. Configuration: No additional configuration needed. Process will no longer run with PID 1 Doesn't work for Windows containers . Learn more","title":"Process Namespace Sharing (pns)"},{"location":"workflow-inputs/","text":"Workflow Inputs \u00b6 Introduction \u00b6 Workflows and template s operate on a set of defined parameters and arguments that are supplied to the running container. The precise details of how to manage the inputs can be confusing; this article attempts to clarify concepts and provide simple working examples to illustrate the various configuration options. The examples below are limited to DAGTemplate s and mainly focused on parameters , but similar reasoning applies to the other types of template s. Parameter Inputs \u00b6 First, some clarification of terms is needed. For a glossary reference, see Argo Core Concepts . A workflow provides arguments , which are passed in to the entry point template. A template defines inputs which are then provided by template callers (such as steps , dag , or even a workflow ). The structure of both is identical. For example, in a Workflow , one parameter would look like this: arguments : parameters : - name : workflow-param-1 And in a template : inputs : parameters : - name : template-param-1 Inputs to DAGTemplate s use the arguments format: dag : tasks : - name : step-A template : step-template-a arguments : parameters : - name : template-param-1 value : abcd Previous examples in context: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : example- spec : entrypoint : main arguments : parameters : - name : workflow-param-1 templates : - name : main dag : tasks : - name : step-A template : step-template-a arguments : parameters : - name : template-param-1 value : \"{{workflow.parameters.workflow-param-1}}\" - name : step-template-a inputs : parameters : - name : template-param-1 script : image : alpine command : [ /bin/sh ] source : | echo \"{{inputs.parameters.template-param-1}}\" To run this example: argo submit -n argo example.yaml -p 'workflow-param-1=\"abcd\"' --watch Using Previous Step Outputs As Inputs \u00b6 In DAGTemplate s, it is common to want to take the output of one step and send it as the input to another step. However, there is a difference in how this works for artifacts vs parameters. 
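Note that the workflow-level parameter in the submission example above can also be given a default directly under arguments, making the -p flag optional; a minimal sketch of just that block (the value shown is illustrative):

```yaml
arguments:
  parameters:
    - name: workflow-param-1
      value: abcd   # used as the default when no -p override is supplied
```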
Suppose our step-template-a defines some outputs: outputs : parameters : - name : output-param-1 valueFrom : path : /p1.txt artifacts : - name : output-artifact-1 path : /some-directory In my DAGTemplate , I can send these outputs to another template like this: dag : tasks : - name : step-A template : step-template-a arguments : parameters : - name : template-param-1 value : \"{{workflow.parameters.workflow-param-1}}\" - name : step-B dependencies : [ step-A ] template : step-template-b arguments : parameters : - name : template-param-2 value : \"{{tasks.step-A.outputs.parameters.output-param-1}}\" artifacts : - name : input-artifact-1 from : \"{{tasks.step-A.outputs.artifacts.output-artifact-1}}\" Note the important distinction between parameters and artifacts ; they both share the name field, but one uses value and the other uses from .","title":"Workflow Inputs"},{"location":"workflow-inputs/#workflow-inputs","text":"","title":"Workflow Inputs"},{"location":"workflow-inputs/#introduction","text":"Workflows and template s operate on a set of defined parameters and arguments that are supplied to the running container. The precise details of how to manage the inputs can be confusing; this article attempts to clarify concepts and provide simple working examples to illustrate the various configuration options. The examples below are limited to DAGTemplate s and mainly focused on parameters , but similar reasoning applies to the other types of template s.","title":"Introduction"},{"location":"workflow-inputs/#parameter-inputs","text":"First, some clarification of terms is needed. For a glossary reference, see Argo Core Concepts . A workflow provides arguments , which are passed in to the entry point template. A template defines inputs which are then provided by template callers (such as steps , dag , or even a workflow ). The structure of both is identical. For example, in a Workflow , one parameter would look like this: arguments : parameters : - name : workflow-param-1 And in a template : inputs : parameters : - name : template-param-1 Inputs to DAGTemplate s use the arguments format: dag : tasks : - name : step-A template : step-template-a arguments : parameters : - name : template-param-1 value : abcd Previous examples in context: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : example- spec : entrypoint : main arguments : parameters : - name : workflow-param-1 templates : - name : main dag : tasks : - name : step-A template : step-template-a arguments : parameters : - name : template-param-1 value : \"{{workflow.parameters.workflow-param-1}}\" - name : step-template-a inputs : parameters : - name : template-param-1 script : image : alpine command : [ /bin/sh ] source : | echo \"{{inputs.parameters.template-param-1}}\" To run this example: argo submit -n argo example.yaml -p 'workflow-param-1=\"abcd\"' --watch","title":"Parameter Inputs"},{"location":"workflow-inputs/#using-previous-step-outputs-as-inputs","text":"In DAGTemplate s, it is common to want to take the output of one step and send it as the input to another step. However, there is a difference in how this works for artifacts vs parameters. 
Suppose our step-template-a defines some outputs: outputs : parameters : - name : output-param-1 valueFrom : path : /p1.txt artifacts : - name : output-artifact-1 path : /some-directory In my DAGTemplate , I can send these outputs to another template like this: dag : tasks : - name : step-A template : step-template-a arguments : parameters : - name : template-param-1 value : \"{{workflow.parameters.workflow-param-1}}\" - name : step-B dependencies : [ step-A ] template : step-template-b arguments : parameters : - name : template-param-2 value : \"{{tasks.step-A.outputs.parameters.output-param-1}}\" artifacts : - name : input-artifact-1 from : \"{{tasks.step-A.outputs.artifacts.output-artifact-1}}\" Note the important distinction between parameters and artifacts ; they both share the name field, but one uses value and the other uses from .","title":"Using Previous Step Outputs As Inputs"},{"location":"workflow-notifications/","text":"Workflow Notifications \u00b6 There are a number of use cases where you may wish to notify an external system when a workflow completes: Send an email. Send a Slack (or other instant message). Send a message to Kafka (or other message bus). You have options: For individual workflows, can add an exit handler to your workflow, such as in this example . If you want the same for every workflow, you can add an exit handler to the default workflow spec . Use a service (e.g. Heptio Labs EventRouter ) to the Workflow events we emit.","title":"Workflow Notifications"},{"location":"workflow-notifications/#workflow-notifications","text":"There are a number of use cases where you may wish to notify an external system when a workflow completes: Send an email. Send a Slack (or other instant message). Send a message to Kafka (or other message bus). You have options: For individual workflows, can add an exit handler to your workflow, such as in this example . If you want the same for every workflow, you can add an exit handler to the default workflow spec . Use a service (e.g. Heptio Labs EventRouter ) to the Workflow events we emit.","title":"Workflow Notifications"},{"location":"workflow-of-workflows/","text":"Workflow of Workflows \u00b6 v2.9 and after Introduction \u00b6 The Workflow of Workflows pattern involves a parent workflow triggering one or more child workflows, managing them, and acting on their results. Examples \u00b6 You can use workflowTemplateRef to trigger a workflow inline. Define your workflow as a workflowtemplate . apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : entrypoint : whalesay-template arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] Create the Workflowtemplate in cluster using argo template create Define the workflow of workflows. # This template demonstrates a workflow of workflows. # Workflow triggers one or more workflows and manages them. 
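# Each child Workflow below is created by a resource template; successCondition and
# failureCondition (checked against the child's status.phase) make the parent step
# wait for that child to finish before the next step starts.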
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-of-workflows- spec : entrypoint : main templates : - name : main steps : - - name : workflow1 template : resource-without-argument arguments : parameters : - name : workflowtemplate value : \"workflow-template-submittable\" - - name : workflow2 template : resource-with-argument arguments : parameters : - name : workflowtemplate value : \"workflow-template-submittable\" - name : message value : \"Welcome Argo\" - name : resource-without-argument inputs : parameters : - name : workflowtemplate resource : action : create manifest : | apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: workflow-of-workflows-1- spec: workflowTemplateRef: name: {{inputs.parameters.workflowtemplate}} successCondition : status.phase == Succeeded failureCondition : status.phase in (Failed, Error) - name : resource-with-argument inputs : parameters : - name : workflowtemplate - name : message resource : action : create manifest : | apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: workflow-of-workflows-2- spec: arguments: parameters: - name: message value: {{inputs.parameters.message}} workflowTemplateRef: name: {{inputs.parameters.workflowtemplate}} successCondition : status.phase == Succeeded failureCondition : status.phase in (Failed, Error)","title":"Workflow of Workflows"},{"location":"workflow-of-workflows/#workflow-of-workflows","text":"v2.9 and after","title":"Workflow of Workflows"},{"location":"workflow-of-workflows/#introduction","text":"The Workflow of Workflows pattern involves a parent workflow triggering one or more child workflows, managing them, and acting on their results.","title":"Introduction"},{"location":"workflow-of-workflows/#examples","text":"You can use workflowTemplateRef to trigger a workflow inline. Define your workflow as a workflowtemplate . apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : entrypoint : whalesay-template arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] Create the Workflowtemplate in cluster using argo template create Define the workflow of workflows. # This template demonstrates a workflow of workflows. # Workflow triggers one or more workflows and manages them. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-of-workflows- spec : entrypoint : main templates : - name : main steps : - - name : workflow1 template : resource-without-argument arguments : parameters : - name : workflowtemplate value : \"workflow-template-submittable\" - - name : workflow2 template : resource-with-argument arguments : parameters : - name : workflowtemplate value : \"workflow-template-submittable\" - name : message value : \"Welcome Argo\" - name : resource-without-argument inputs : parameters : - name : workflowtemplate resource : action : create manifest : | apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: workflow-of-workflows-1- spec: workflowTemplateRef: name: {{inputs.parameters.workflowtemplate}} successCondition : status.phase == Succeeded failureCondition : status.phase in (Failed, Error) - name : resource-with-argument inputs : parameters : - name : workflowtemplate - name : message resource : action : create manifest : | apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: workflow-of-workflows-2- spec: arguments: parameters: - name: message value: {{inputs.parameters.message}} workflowTemplateRef: name: {{inputs.parameters.workflowtemplate}} successCondition : status.phase == Succeeded failureCondition : status.phase in (Failed, Error)","title":"Examples"},{"location":"workflow-pod-security-context/","text":"Workflow Pod Security Context \u00b6 By default, all workflow pods run as root. The Docker executor even requires privileged: true . For other workflow executors , you can run your workflow pods more securely by configuring the security context for your workflow pod. This is likely to be necessary if you have a pod security policy . You probably can't use the Docker executor if you have a pod security policy. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : security-context- spec : securityContext : runAsNonRoot : true runAsUser : 8737 #; any non-root user You can configure this globally using workflow defaults . It is easy to make a workflow need root unintentionally You may find that user's workflows have been written to require root with seemingly innocuous code. E.g. mkdir /my-dir would require root. You must use volumes for output artifacts If you use runAsNonRoot - you cannot have output artifacts on base layer (e.g. /tmp ). You must use a volume (e.g. empty dir ).","title":"Workflow Pod Security Context"},{"location":"workflow-pod-security-context/#workflow-pod-security-context","text":"By default, all workflow pods run as root. The Docker executor even requires privileged: true . For other workflow executors , you can run your workflow pods more securely by configuring the security context for your workflow pod. This is likely to be necessary if you have a pod security policy . You probably can't use the Docker executor if you have a pod security policy. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : security-context- spec : securityContext : runAsNonRoot : true runAsUser : 8737 #; any non-root user You can configure this globally using workflow defaults . It is easy to make a workflow need root unintentionally You may find that user's workflows have been written to require root with seemingly innocuous code. E.g. mkdir /my-dir would require root. You must use volumes for output artifacts If you use runAsNonRoot - you cannot have output artifacts on base layer (e.g. /tmp ). You must use a volume (e.g. 
empty dir ).","title":"Workflow Pod Security Context"},{"location":"workflow-rbac/","text":"Workflow RBAC \u00b6 All pods in a workflow run with the service account specified in workflow.spec.serviceAccountName , or if omitted, the default service account of the workflow's namespace. The amount of access which a workflow needs is dependent on what the workflow needs to do. For example, if your workflow needs to deploy a resource, then the workflow's service account will require 'create' privileges on that resource. Warning : We do not recommend using the default service account in production. It is a shared account so may have permissions added to it you do not want. Instead, create a service account only for your workflow. The minimum for the executor to function: For >= v3.4: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : executor rules : - apiGroups : - argoproj.io resources : - workflowtaskresults verbs : - create - patch For <= v3.3 use. apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : executor rules : - apiGroups : - \"\" resources : - pods verbs : - get - patch Warning: For many organizations, it may not be acceptable to give a workflow the pod patch permission, see #3961 If you are not using the emissary, you'll need additional permissions. See executor for suitable permissions.","title":"Workflow RBAC"},{"location":"workflow-rbac/#workflow-rbac","text":"All pods in a workflow run with the service account specified in workflow.spec.serviceAccountName , or if omitted, the default service account of the workflow's namespace. The amount of access which a workflow needs is dependent on what the workflow needs to do. For example, if your workflow needs to deploy a resource, then the workflow's service account will require 'create' privileges on that resource. Warning : We do not recommend using the default service account in production. It is a shared account so may have permissions added to it you do not want. Instead, create a service account only for your workflow. The minimum for the executor to function: For >= v3.4: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : executor rules : - apiGroups : - argoproj.io resources : - workflowtaskresults verbs : - create - patch For <= v3.3 use. apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : name : executor rules : - apiGroups : - \"\" resources : - pods verbs : - get - patch Warning: For many organizations, it may not be acceptable to give a workflow the pod patch permission, see #3961 If you are not using the emissary, you'll need additional permissions. See executor for suitable permissions.","title":"Workflow RBAC"},{"location":"workflow-restrictions/","text":"Workflow Restrictions \u00b6 v2.9 and after Introduction \u00b6 As the administrator of the controller, you may want to limit which types of Workflows your users can run. Workflow Restrictions allow you to set requirements for all Workflows. Available Restrictions \u00b6 templateReferencing: Strict : Only process Workflows using workflowTemplateRef . You can use this to require usage of WorkflowTemplates, disallowing arbitrary Workflow execution. templateReferencing: Secure : Same as Strict plus enforce that a referenced WorkflowTemplate hasn't changed between operations. If a running Workflow's underlying WorkflowTemplate changes, the Workflow will error out. Setting Workflow Restrictions \u00b6 You can add workflowRestrictions in the workflow-controller-configmap . 
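To round out the Role examples above, here is a hedged sketch (the ServiceAccount name and namespace are illustrative) of binding the executor Role to a dedicated ServiceAccount that a Workflow can then reference via workflow.spec.serviceAccountName:

```yaml
# Illustrative names: a dedicated ServiceAccount plus a binding to the "executor" Role above.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: workflow
  namespace: argo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: workflow-executor-binding
  namespace: argo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: executor
subjects:
  - kind: ServiceAccount
    name: workflow
    namespace: argo
```

A Workflow opts in by setting serviceAccountName: workflow in its spec.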
For example, to specify that Workflows may only run with workflowTemplateRef : # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : workflowRestrictions : | templateReferencing: Strict","title":"Workflow Restrictions"},{"location":"workflow-restrictions/#workflow-restrictions","text":"v2.9 and after","title":"Workflow Restrictions"},{"location":"workflow-restrictions/#introduction","text":"As the administrator of the controller, you may want to limit which types of Workflows your users can run. Workflow Restrictions allow you to set requirements for all Workflows.","title":"Introduction"},{"location":"workflow-restrictions/#available-restrictions","text":"templateReferencing: Strict : Only process Workflows using workflowTemplateRef . You can use this to require usage of WorkflowTemplates, disallowing arbitrary Workflow execution. templateReferencing: Secure : Same as Strict plus enforce that a referenced WorkflowTemplate hasn't changed between operations. If a running Workflow's underlying WorkflowTemplate changes, the Workflow will error out.","title":"Available Restrictions"},{"location":"workflow-restrictions/#setting-workflow-restrictions","text":"You can add workflowRestrictions in the workflow-controller-configmap . For example, to specify that Workflows may only run with workflowTemplateRef : # This file describes the config settings available in the workflow controller configmap apiVersion : v1 kind : ConfigMap metadata : name : workflow-controller-configmap data : workflowRestrictions : | templateReferencing: Strict","title":"Setting Workflow Restrictions"},{"location":"workflow-submitting-workflow/","text":"One Workflow Submitting Another \u00b6 v2.8 and after If you want one workflow to create another, you can do this using curl . You'll need an access token . Typically the best way is to submit from a workflow template: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : demo- spec : entrypoint : main templates : - name : main steps : - - name : a template : create-wf - name : create-wf script : image : curlimages/curl:latest command : - sh source : > curl https://argo-server:2746/api/v1/workflows/argo/submit \\ -fs \\ -H \"Authorization: Bearer eyJhbGci...\" \\ -d '{\"resourceKind\": \"WorkflowTemplate\", \"resourceName\": \"wait\", \"submitOptions\": {\"labels\": \"workflows.argoproj.io/workflow-template=wait\"}}'","title":"One Workflow Submitting Another"},{"location":"workflow-submitting-workflow/#one-workflow-submitting-another","text":"v2.8 and after If you want one workflow to create another, you can do this using curl . You'll need an access token . 
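One hedged way to obtain such a token, assuming a dedicated submit-bot ServiceAccount that is allowed to create Workflows and a cluster that still issues long-lived ServiceAccount token Secrets, is sketched below; depending on the Argo Server's auth mode, the Bearer value in the curl call is then the token field of this Secret, base64-decoded:

```yaml
# Illustrative: a long-lived token Secret for a hypothetical "submit-bot" ServiceAccount.
apiVersion: v1
kind: Secret
metadata:
  name: submit-bot-token
  namespace: argo
  annotations:
    kubernetes.io/service-account.name: submit-bot
type: kubernetes.io/service-account-token
```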
Typically the best way is to submit from a workflow template: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : demo- spec : entrypoint : main templates : - name : main steps : - - name : a template : create-wf - name : create-wf script : image : curlimages/curl:latest command : - sh source : > curl https://argo-server:2746/api/v1/workflows/argo/submit \\ -fs \\ -H \"Authorization: Bearer eyJhbGci...\" \\ -d '{\"resourceKind\": \"WorkflowTemplate\", \"resourceName\": \"wait\", \"submitOptions\": {\"labels\": \"workflows.argoproj.io/workflow-template=wait\"}}'","title":"One Workflow Submitting Another"},{"location":"workflow-templates/","text":"Workflow Templates \u00b6 v2.4 and after Introduction \u00b6 WorkflowTemplates are definitions of Workflows that live in your cluster. This allows you to create a library of frequently-used templates and reuse them either by submitting them directly (v2.7 and after) or by referencing them from your Workflows . WorkflowTemplate vs template \u00b6 The terms WorkflowTemplate and template have created an unfortunate naming collision and have created some confusion in the past. However, a quick description should clarify each and their differences. A template (lower-case) is a task within a Workflow or (confusingly) a WorkflowTemplate under the field templates . Whenever you define a Workflow , you must define at least one (but usually more than one) template to run. This template can be of type container , script , dag , steps , resource , or suspend and can be referenced by an entrypoint or by other dag , and step templates. Here is an example of a Workflow with two templates : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : steps- spec : entrypoint : hello # We reference our first \"template\" here templates : - name : hello # The first \"template\" in this Workflow, it is referenced by \"entrypoint\" steps : # The type of this \"template\" is \"steps\" - - name : hello template : whalesay # We reference our second \"template\" here arguments : parameters : [{ name : message , value : \"hello1\" }] - name : whalesay # The second \"template\" in this Workflow, it is referenced by \"hello\" inputs : parameters : - name : message container : # The type of this \"template\" is \"container\" image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] A WorkflowTemplate is a definition of a Workflow that lives in your cluster. Since it is a definition of a Workflow it also contains templates . These templates can be referenced from within the WorkflowTemplate and from other Workflows and WorkflowTemplates on your cluster. To see how, please see Referencing Other WorkflowTemplates . WorkflowTemplate Spec \u00b6 v2.7 and after In v2.7 and after, all the fields in WorkflowSpec (except for priority that must be configured in a WorkflowSpec itself) are supported for WorkflowTemplates . You can take any existing Workflow you may have and convert it to a WorkflowTemplate by substituting kind: Workflow to kind: WorkflowTemplate . v2.4 \u2013 2.6 WorkflowTemplates in v2.4 - v2.6 are only partial Workflow definitions and only support the templates and arguments field. 
This would not be a valid WorkflowTemplate in v2.4 - v2.6 (notice entrypoint field): apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : entrypoint : whalesay-template # Fields other than \"arguments\" and \"templates\" not supported in v2.4 - v2.6 arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] However, this would be a valid WorkflowTemplate : apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] Adding labels/annotations to Workflows with workflowMetadata \u00b6 2.10.2 and after To automatically add labels and/or annotations to Workflows created from WorkflowTemplates , use workflowMetadata . apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : workflowMetadata : labels : example-label : example-value Working with parameters \u00b6 When working with parameters in a WorkflowTemplate , please note the following: When working with global parameters, you can instantiate your global variables in your Workflow and then directly reference them in your WorkflowTemplate . Below is a working example: apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : hello-world-template-global-arg spec : serviceAccountName : argo templates : - name : hello-world container : image : docker/whalesay command : [ cowsay ] args : [ \"{{workflow.parameters.global-parameter}}\" ] --- apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-wf-global-arg- spec : serviceAccountName : argo entrypoint : whalesay arguments : parameters : - name : global-parameter value : hello templates : - name : whalesay steps : - - name : hello-world templateRef : name : hello-world-template-global-arg template : hello-world When working with local parameters, the values of local parameters must be supplied at the template definition inside the WorkflowTemplate . Below is a working example: apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : hello-world-template-local-arg spec : templates : - name : hello-world inputs : parameters : - name : msg value : \"hello world\" container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.msg}}\" ] --- apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-local-arg- spec : entrypoint : whalesay templates : - name : whalesay steps : - - name : hello-world templateRef : name : hello-world-template-local-arg template : hello-world Referencing other WorkflowTemplates \u00b6 You can reference templates from another WorkflowTemplates (see the difference between the two ) using a templateRef field. Just as how you reference other templates within the same Workflow , you should do so from a steps or dag template. 
Here is an example from a steps template: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay steps : # You should only reference external \"templates\" in a \"steps\" or \"dag\" \"template\". - - name : call-whalesay-template templateRef : # You can reference a \"template\" from another \"WorkflowTemplate\" using this field name : workflow-template-1 # This is the name of the \"WorkflowTemplate\" CRD that contains the \"template\" you want template : whalesay-template # This is the name of the \"template\" you want to reference arguments : # You can pass in arguments as normal parameters : - name : message value : \"hello world\" You can also do so similarly with a dag template: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay dag : tasks : - name : call-whalesay-template templateRef : name : workflow-template-1 template : whalesay-template arguments : parameters : - name : message value : \"hello world\" You should never reference another template directly on a template object (outside of a steps or dag template). This includes both using template and templateRef . This behavior is deprecated, no longer supported, and will be removed in a future version. Here is an example of a deprecated reference that should not be used : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay template : # You should NEVER use \"template\" here. Use it under a \"steps\" or \"dag\" template (see above). templateRef : # You should NEVER use \"templateRef\" here. Use it under a \"steps\" or \"dag\" template (see above). name : workflow-template-1 template : whalesay-template arguments : # Arguments here are ignored. Use them under a \"steps\" or \"dag\" template (see above). parameters : - name : message value : \"hello world\" The reasoning for deprecating this behavior is that a template is a \"definition\": it defines inputs and things to be done once instantiated. With this deprecated behavior, the same template object is allowed to be an \"instantiator\": to pass in \"live\" arguments and reference other templates (those other templates may be \"definitions\" or \"instantiators\"). This behavior has been problematic and dangerous. It causes confusion and has design inconsistencies. 2.9 and after Create Workflow from WorkflowTemplate Spec \u00b6 You can create Workflow from WorkflowTemplate spec using workflowTemplateRef . If you pass the arguments to created Workflow , it will be merged with workflow template arguments. 
Here is an example for referring WorkflowTemplate as Workflow with passing entrypoint and Workflow Arguments to WorkflowTemplate apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay-template arguments : parameters : - name : message value : \"from workflow\" workflowTemplateRef : name : workflow-template-submittable Here is an example of a referring WorkflowTemplate as Workflow and using WorkflowTemplates 's entrypoint and Workflow Arguments apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : workflowTemplateRef : name : workflow-template-submittable Managing WorkflowTemplates \u00b6 CLI \u00b6 You can create some example templates as follows: argo template create https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/workflow-template/templates.yaml Then submit a workflow using one of those templates: argo submit https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/workflow-template/hello-world.yaml 2.7 and after Then submit a WorkflowTemplate as a Workflow : argo submit --from workflowtemplate/workflow-template-submittable If you need to submit a WorkflowTemplate as a Workflow with parameters: argo submit --from workflowtemplate/workflow-template-submittable -p message = value1 kubectl \u00b6 Using kubectl apply -f and kubectl get wftmpl GitOps via Argo CD \u00b6 WorkflowTemplate resources can be managed with GitOps by using Argo CD UI \u00b6 WorkflowTemplate resources can also be managed by the UI Users can specify options under enum to enable drop-down list selection when submitting WorkflowTemplate s from the UI. apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-with-enum-values spec : entrypoint : argosay arguments : parameters : - name : message value : one enum : - one - two - three templates : - name : argosay inputs : parameters : - name : message value : '{{workflow.parameters.message}}' container : name : main image : 'argoproj/argosay:v2' command : - /argosay args : - echo - '{{inputs.parameters.message}}'","title":"Workflow Templates"},{"location":"workflow-templates/#workflow-templates","text":"v2.4 and after","title":"Workflow Templates"},{"location":"workflow-templates/#introduction","text":"WorkflowTemplates are definitions of Workflows that live in your cluster. This allows you to create a library of frequently-used templates and reuse them either by submitting them directly (v2.7 and after) or by referencing them from your Workflows .","title":"Introduction"},{"location":"workflow-templates/#workflowtemplate-vs-template","text":"The terms WorkflowTemplate and template have created an unfortunate naming collision and have created some confusion in the past. However, a quick description should clarify each and their differences. A template (lower-case) is a task within a Workflow or (confusingly) a WorkflowTemplate under the field templates . Whenever you define a Workflow , you must define at least one (but usually more than one) template to run. This template can be of type container , script , dag , steps , resource , or suspend and can be referenced by an entrypoint or by other dag , and step templates. 
Here is an example of a Workflow with two templates : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : steps- spec : entrypoint : hello # We reference our first \"template\" here templates : - name : hello # The first \"template\" in this Workflow, it is referenced by \"entrypoint\" steps : # The type of this \"template\" is \"steps\" - - name : hello template : whalesay # We reference our second \"template\" here arguments : parameters : [{ name : message , value : \"hello1\" }] - name : whalesay # The second \"template\" in this Workflow, it is referenced by \"hello\" inputs : parameters : - name : message container : # The type of this \"template\" is \"container\" image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] A WorkflowTemplate is a definition of a Workflow that lives in your cluster. Since it is a definition of a Workflow it also contains templates . These templates can be referenced from within the WorkflowTemplate and from other Workflows and WorkflowTemplates on your cluster. To see how, please see Referencing Other WorkflowTemplates .","title":"WorkflowTemplate vs template"},{"location":"workflow-templates/#workflowtemplate-spec","text":"v2.7 and after In v2.7 and after, all the fields in WorkflowSpec (except for priority that must be configured in a WorkflowSpec itself) are supported for WorkflowTemplates . You can take any existing Workflow you may have and convert it to a WorkflowTemplate by substituting kind: Workflow to kind: WorkflowTemplate . v2.4 \u2013 2.6 WorkflowTemplates in v2.4 - v2.6 are only partial Workflow definitions and only support the templates and arguments field. This would not be a valid WorkflowTemplate in v2.4 - v2.6 (notice entrypoint field): apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : entrypoint : whalesay-template # Fields other than \"arguments\" and \"templates\" not supported in v2.4 - v2.6 arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] However, this would be a valid WorkflowTemplate : apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : arguments : parameters : - name : message value : hello world templates : - name : whalesay-template inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ]","title":"WorkflowTemplate Spec"},{"location":"workflow-templates/#adding-labelsannotations-to-workflows-with-workflowmetadata","text":"2.10.2 and after To automatically add labels and/or annotations to Workflows created from WorkflowTemplates , use workflowMetadata . apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-submittable spec : workflowMetadata : labels : example-label : example-value","title":"Adding labels/annotations to Workflows with workflowMetadata"},{"location":"workflow-templates/#working-with-parameters","text":"When working with parameters in a WorkflowTemplate , please note the following: When working with global parameters, you can instantiate your global variables in your Workflow and then directly reference them in your WorkflowTemplate . 
Below is a working example: apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : hello-world-template-global-arg spec : serviceAccountName : argo templates : - name : hello-world container : image : docker/whalesay command : [ cowsay ] args : [ \"{{workflow.parameters.global-parameter}}\" ] --- apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-wf-global-arg- spec : serviceAccountName : argo entrypoint : whalesay arguments : parameters : - name : global-parameter value : hello templates : - name : whalesay steps : - - name : hello-world templateRef : name : hello-world-template-global-arg template : hello-world When working with local parameters, the values of local parameters must be supplied at the template definition inside the WorkflowTemplate . Below is a working example: apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : hello-world-template-local-arg spec : templates : - name : hello-world inputs : parameters : - name : msg value : \"hello world\" container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.msg}}\" ] --- apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-local-arg- spec : entrypoint : whalesay templates : - name : whalesay steps : - - name : hello-world templateRef : name : hello-world-template-local-arg template : hello-world","title":"Working with parameters"},{"location":"workflow-templates/#referencing-other-workflowtemplates","text":"You can reference templates from another WorkflowTemplates (see the difference between the two ) using a templateRef field. Just as how you reference other templates within the same Workflow , you should do so from a steps or dag template. Here is an example from a steps template: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay steps : # You should only reference external \"templates\" in a \"steps\" or \"dag\" \"template\". - - name : call-whalesay-template templateRef : # You can reference a \"template\" from another \"WorkflowTemplate\" using this field name : workflow-template-1 # This is the name of the \"WorkflowTemplate\" CRD that contains the \"template\" you want template : whalesay-template # This is the name of the \"template\" you want to reference arguments : # You can pass in arguments as normal parameters : - name : message value : \"hello world\" You can also do so similarly with a dag template: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay dag : tasks : - name : call-whalesay-template templateRef : name : workflow-template-1 template : whalesay-template arguments : parameters : - name : message value : \"hello world\" You should never reference another template directly on a template object (outside of a steps or dag template). This includes both using template and templateRef . This behavior is deprecated, no longer supported, and will be removed in a future version. Here is an example of a deprecated reference that should not be used : apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay templates : - name : whalesay template : # You should NEVER use \"template\" here. Use it under a \"steps\" or \"dag\" template (see above). 
templateRef : # You should NEVER use \"templateRef\" here. Use it under a \"steps\" or \"dag\" template (see above). name : workflow-template-1 template : whalesay-template arguments : # Arguments here are ignored. Use them under a \"steps\" or \"dag\" template (see above). parameters : - name : message value : \"hello world\" The reasoning for deprecating this behavior is that a template is a \"definition\": it defines inputs and things to be done once instantiated. With this deprecated behavior, the same template object is allowed to be an \"instantiator\": to pass in \"live\" arguments and reference other templates (those other templates may be \"definitions\" or \"instantiators\"). This behavior has been problematic and dangerous. It causes confusion and has design inconsistencies. 2.9 and after","title":"Referencing other WorkflowTemplates"},{"location":"workflow-templates/#create-workflow-from-workflowtemplate-spec","text":"You can create Workflow from WorkflowTemplate spec using workflowTemplateRef . If you pass the arguments to created Workflow , it will be merged with workflow template arguments. Here is an example for referring WorkflowTemplate as Workflow with passing entrypoint and Workflow Arguments to WorkflowTemplate apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : entrypoint : whalesay-template arguments : parameters : - name : message value : \"from workflow\" workflowTemplateRef : name : workflow-template-submittable Here is an example of a referring WorkflowTemplate as Workflow and using WorkflowTemplates 's entrypoint and Workflow Arguments apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : workflow-template-hello-world- spec : workflowTemplateRef : name : workflow-template-submittable","title":"Create Workflow from WorkflowTemplate Spec"},{"location":"workflow-templates/#managing-workflowtemplates","text":"","title":"Managing WorkflowTemplates"},{"location":"workflow-templates/#cli","text":"You can create some example templates as follows: argo template create https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/workflow-template/templates.yaml Then submit a workflow using one of those templates: argo submit https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/workflow-template/hello-world.yaml 2.7 and after Then submit a WorkflowTemplate as a Workflow : argo submit --from workflowtemplate/workflow-template-submittable If you need to submit a WorkflowTemplate as a Workflow with parameters: argo submit --from workflowtemplate/workflow-template-submittable -p message = value1","title":"CLI"},{"location":"workflow-templates/#kubectl","text":"Using kubectl apply -f and kubectl get wftmpl","title":"kubectl"},{"location":"workflow-templates/#gitops-via-argo-cd","text":"WorkflowTemplate resources can be managed with GitOps by using Argo CD","title":"GitOps via Argo CD"},{"location":"workflow-templates/#ui","text":"WorkflowTemplate resources can also be managed by the UI Users can specify options under enum to enable drop-down list selection when submitting WorkflowTemplate s from the UI. 
apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : workflow-template-with-enum-values spec : entrypoint : argosay arguments : parameters : - name : message value : one enum : - one - two - three templates : - name : argosay inputs : parameters : - name : message value : '{{workflow.parameters.message}}' container : name : main image : 'argoproj/argosay:v2' command : - /argosay args : - echo - '{{inputs.parameters.message}}'","title":"UI"},{"location":"cli/argo/","text":"argo \u00b6 argo is the command line interface to Argo Synopsis \u00b6 You can use the CLI in the following modes: Kubernetes API Mode (default) \u00b6 Requests are sent directly to the Kubernetes API. No Argo Server is needed. Large workflows and the workflow archive are not supported. Use when you have direct access to the Kubernetes API, and don't need large workflow or workflow archive support. If you're using an instance ID (which is very unlikely), you'll need to set it: ARGO_INSTANCEID=your-instanceid Argo Server GRPC Mode \u00b6 Requests are sent to the Argo Server API via GRPC (using HTTP/2). Large workflows and the workflow archive are supported. Network load-balancers that do not support HTTP/2 are not supported. Use if you do not have access to the Kubernetes API (e.g. you're in another cluster), and you're running the Argo Server using a network load-balancer that supports HTTP/2. To enable, set ARGO_SERVER: ARGO_SERVER = localhost : 2746 ; # The format is \"host:port\" - do not prefix with \"http\" or \"https\" If you have transport-layer security (TLS) enabled (i.e. you are running \"argo server --secure\" and therefore serving HTTPS): ARGO_SECURE=true If your server is running with self-signed certificates (do not use this in production): ARGO_INSECURE_SKIP_VERIFY=true By default, the CLI uses your KUBECONFIG to determine defaults for ARGO_TOKEN and ARGO_NAMESPACE. You will probably get an error like \"no configuration has been provided\". To prevent this: KUBECONFIG=/dev/null You will then need to set: ARGO_NAMESPACE=argo And: ARGO_TOKEN='Bearer ******' ;# Should always start with \"Bearer \" or \"Basic \". Argo Server HTTP1 Mode \u00b6 As per GRPC mode, but uses HTTP/1.1. Can be used with an ALB that does not support HTTP/2. The command \"argo logs --since-time=2020....\" will not work (due to time-type). Use this when your network load-balancer does not support HTTP/2. Use the same configuration as GRPC mode, but also set: ARGO_HTTP1=true If your server is behind an ingress with a path (you'll be running \"argo server --basehref /...\" or \"BASE_HREF=/... argo server\"): ARGO_BASE_HREF=/argo argo [flags] Options \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. -h, --help help for argo --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive argo auth - manage authentication settings argo cluster-template - manipulate cluster workflow templates argo completion - output shell completion code for the specified shell (bash or zsh) argo cp - copy artifacts from workflow argo cron - manage cron workflows argo delete - delete workflows argo executor-plugin - manage executor plugins argo get - display details about a workflow argo lint - validate files or directories of manifests argo list - list workflows argo logs - view logs of a pod or workflow argo node - perform action on a node in a workflow argo resubmit - resubmit one or more workflows argo resume - resume zero or more workflows (opposite of suspend) argo retry - retry zero or more workflows argo server - start the Argo Server argo stop - stop zero or more workflows allowing all exit handlers to run argo submit - submit a workflow argo suspend - suspend zero or more workflows (opposite of resume) argo template - manipulate workflow templates argo terminate - terminate zero or more workflows immediately argo version - print version information argo wait - waits for workflows to complete argo watch - watch a workflow until it completes","title":"argo"},{"location":"cli/argo/#argo","text":"argo is the command line interface to Argo","title":"argo"},{"location":"cli/argo/#synopsis","text":"You can use the CLI in the following modes:","title":"Synopsis"},{"location":"cli/argo/#kubernetes-api-mode-default","text":"Requests are sent directly to the Kubernetes API. No Argo Server is needed. Large workflows and the workflow archive are not supported. Use when you have direct access to the Kubernetes API, and don't need large workflow or workflow archive support. If you're using an instance ID (which is very unlikely), you'll need to set it: ARGO_INSTANCEID=your-instanceid","title":"Kubernetes API Mode (default)"},{"location":"cli/argo/#argo-server-grpc-mode","text":"Requests are sent to the Argo Server API via GRPC (using HTTP/2). Large workflows and the workflow archive are supported. Network load-balancers that do not support HTTP/2 are not supported. Use if you do not have access to the Kubernetes API (e.g. you're in another cluster), and you're running the Argo Server using a network load-balancer that supports HTTP/2. To enable, set ARGO_SERVER: ARGO_SERVER = localhost : 2746 ; # The format is \"host:port\" - do not prefix with \"http\" or \"https\" If you have transport-layer security (TLS) enabled (i.e. you are running \"argo server --secure\" and therefore serving HTTPS): ARGO_SECURE=true If your server is running with self-signed certificates (do not use this in production): ARGO_INSECURE_SKIP_VERIFY=true By default, the CLI uses your KUBECONFIG to determine defaults for ARGO_TOKEN and ARGO_NAMESPACE. You will probably get an error like \"no configuration has been provided\". To prevent this: KUBECONFIG=/dev/null You will then need to set: ARGO_NAMESPACE=argo And: ARGO_TOKEN='Bearer ******' ;# Should always start with \"Bearer \" or \"Basic \".","title":"Argo Server GRPC Mode"},{"location":"cli/argo/#argo-server-http1-mode","text":"As per GRPC mode, but uses HTTP/1.1. Can be used with an ALB that does not support HTTP/2. The command \"argo logs --since-time=2020....\" will not work (due to time-type). Use this when your network load-balancer does not support HTTP/2. Use the same configuration as GRPC mode, but also set: ARGO_HTTP1=true If your server is behind an ingress with a path (you'll be running \"argo server --basehref /...\"
or \"BASE_HREF=/... argo server\"): ARGO_BASE_HREF=/argo argo [flags]","title":"Argo Server HTTP1 Mode"},{"location":"cli/argo/#options","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. -h, --help help for argo --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug","title":"Options"},{"location":"cli/argo/#see-also","text":"argo archive - manage the workflow archive argo auth - manage authentication settings argo cluster-template - manipulate cluster workflow templates argo completion - output shell completion code for the specified shell (bash or zsh) argo cp - copy artifacts from workflow argo cron - manage cron workflows argo delete - delete workflows argo executor-plugin - manage executor plugins argo get - display details about a workflow argo lint - validate files or directories of manifests argo list - list workflows argo logs - view logs of a pod or workflow argo node - perform action on a node in a workflow argo resubmit - resubmit one or more workflows argo resume - resume zero or more workflows (opposite of suspend) argo retry - retry zero or more workflows argo server - start the Argo Server argo stop - stop zero or more workflows allowing all exit handlers to run argo submit - submit a workflow argo suspend - suspend zero or more workflows (opposite of resume) argo template - manipulate workflow templates argo terminate - terminate zero or more workflows immediately argo version - print version information argo wait - waits for workflows to complete argo watch - watch a workflow until it completes","title":"SEE ALSO"},{"location":"cli/argo_archive/","text":"argo archive \u00b6 manage the workflow archive argo archive [flags] Options \u00b6 -h, --help help for archive Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo argo archive delete - delete a workflow in the archive argo archive get - get a workflow in the archive argo archive list - list workflows in the archive argo archive list-label-keys - list workflows label keys in the archive argo archive list-label-values - get workflow label values in the archive argo archive resubmit - resubmit one or more workflows argo archive retry - retry zero or more workflows","title":"argo archive"},{"location":"cli/argo_archive/#argo-archive","text":"manage the workflow archive argo archive [flags]","title":"argo archive"},{"location":"cli/argo_archive/#options","text":"-h, --help help for archive","title":"Options"},{"location":"cli/argo_archive/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. 
Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive/#see-also","text":"argo - argo is the command line interface to Argo argo archive delete - delete a workflow in the archive argo archive get - get a workflow in the archive argo archive list - list workflows in the archive argo archive list-label-keys - list workflows label keys in the archive argo archive list-label-values - get workflow label values in the archive argo archive resubmit - resubmit one or more workflows argo archive retry - retry zero or more workflows","title":"SEE ALSO"},{"location":"cli/argo_archive_delete/","text":"argo archive delete \u00b6 delete a workflow in the archive argo archive delete UID... [flags] Options \u00b6 -h, --help help for delete Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive","title":"argo archive delete"},{"location":"cli/argo_archive_delete/#argo-archive-delete","text":"delete a workflow in the archive argo archive delete UID... [flags]","title":"argo archive delete"},{"location":"cli/argo_archive_delete/#options","text":"-h, --help help for delete","title":"Options"},{"location":"cli/argo_archive_delete/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. 
Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive_delete/#see-also","text":"argo archive - manage the workflow archive","title":"SEE ALSO"},{"location":"cli/argo_archive_get/","text":"argo archive get \u00b6 get a workflow in the archive argo archive get UID [flags] Options \u00b6 -h, --help help for get -o, --output string Output format. One of: json|yaml|wide (default \"wide\") Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive","title":"argo archive get"},{"location":"cli/argo_archive_get/#argo-archive-get","text":"get a workflow in the archive argo archive get UID [flags]","title":"argo archive get"},{"location":"cli/argo_archive_get/#options","text":"-h, --help help for get -o, --output string Output format. One of: json|yaml|wide (default \"wide\")","title":"Options"},{"location":"cli/argo_archive_get/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive_get/#see-also","text":"argo archive - manage the workflow archive","title":"SEE ALSO"},{"location":"cli/argo_archive_list-label-keys/","text":"argo archive list-label-keys \u00b6 list workflows label keys in the archive argo archive list-label-keys [flags] Options \u00b6 -h, --help help for list-label-keys Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive","title":"argo archive list-label-keys"},{"location":"cli/argo_archive_list-label-keys/#argo-archive-list-label-keys","text":"list workflows label keys in the archive argo archive list-label-keys [flags]","title":"argo archive list-label-keys"},{"location":"cli/argo_archive_list-label-keys/#options","text":"-h, --help help for list-label-keys","title":"Options"},{"location":"cli/argo_archive_list-label-keys/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive_list-label-keys/#see-also","text":"argo archive - manage the workflow archive","title":"SEE ALSO"},{"location":"cli/argo_archive_list-label-values/","text":"argo archive list-label-values \u00b6 get workflow label values in the archive argo archive list-label-values [flags] Options \u00b6 -h, --help help for list-label-values -l, --selector string Selector (label query) to query on, allows 1 value (e.g. -l key1) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive","title":"argo archive list-label-values"},{"location":"cli/argo_archive_list-label-values/#argo-archive-list-label-values","text":"get workflow label values in the archive argo archive list-label-values [flags]","title":"argo archive list-label-values"},{"location":"cli/argo_archive_list-label-values/#options","text":"-h, --help help for list-label-values -l, --selector string Selector (label query) to query on, allows 1 value (e.g. -l key1)","title":"Options"},{"location":"cli/argo_archive_list-label-values/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive_list-label-values/#see-also","text":"argo archive - manage the workflow archive","title":"SEE ALSO"},{"location":"cli/argo_archive_list/","text":"argo archive list \u00b6 list workflows in the archive argo archive list [flags] Options \u00b6 --chunk-size int Return large lists in chunks rather than all at once. Pass 0 to disable. -h, --help help for list -o, --output string Output format. One of: json|yaml|wide (default \"wide\") -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. 
Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive","title":"argo archive list"},{"location":"cli/argo_archive_list/#argo-archive-list","text":"list workflows in the archive argo archive list [flags]","title":"argo archive list"},{"location":"cli/argo_archive_list/#options","text":"--chunk-size int Return large lists in chunks rather than all at once. Pass 0 to disable. -h, --help help for list -o, --output string Output format. One of: json|yaml|wide (default \"wide\") -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)","title":"Options"},{"location":"cli/argo_archive_list/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. 
Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive_list/#see-also","text":"argo archive - manage the workflow archive","title":"SEE ALSO"},{"location":"cli/argo_archive_resubmit/","text":"argo archive resubmit \u00b6 resubmit one or more workflows argo archive resubmit [WORKFLOW...] [flags] Examples \u00b6 # Resubmit a workflow: argo archive resubmit uid # Resubmit multiple workflows: argo archive resubmit uid another-uid # Resubmit multiple workflows by label selector: argo archive resubmit -l workflows.argoproj.io/test=true # Resubmit multiple workflows by field selector: argo archive resubmit --field-selector metadata.namespace=argo # Resubmit and wait for completion: argo archive resubmit --wait uid # Resubmit and watch until completion: argo archive resubmit --watch uid # Resubmit and tail logs until completion: argo archive resubmit --log uid Options \u00b6 --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for resubmit --log log the workflow until it completes --memoized re-use successful steps & outputs from the previous run -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --priority int32 workflow priority -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is resubmitted --watch watch the workflow until it completes, only works when a single workflow is resubmitted Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. 
--as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive","title":"argo archive resubmit"},{"location":"cli/argo_archive_resubmit/#argo-archive-resubmit","text":"resubmit one or more workflows argo archive resubmit [WORKFLOW...] 
[flags]","title":"argo archive resubmit"},{"location":"cli/argo_archive_resubmit/#examples","text":"# Resubmit a workflow: argo archive resubmit uid # Resubmit multiple workflows: argo archive resubmit uid another-uid # Resubmit multiple workflows by label selector: argo archive resubmit -l workflows.argoproj.io/test=true # Resubmit multiple workflows by field selector: argo archive resubmit --field-selector metadata.namespace=argo # Resubmit and wait for completion: argo archive resubmit --wait uid # Resubmit and watch until completion: argo archive resubmit --watch uid # Resubmit and tail logs until completion: argo archive resubmit --log uid","title":"Examples"},{"location":"cli/argo_archive_resubmit/#options","text":"--field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for resubmit --log log the workflow until it completes --memoized re-use successful steps & outputs from the previous run -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --priority int32 workflow priority -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is resubmitted --watch watch the workflow until it completes, only works when a single workflow is resubmitted","title":"Options"},{"location":"cli/argo_archive_resubmit/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive_resubmit/#see-also","text":"argo archive - manage the workflow archive","title":"SEE ALSO"},{"location":"cli/argo_archive_retry/","text":"argo archive retry \u00b6 retry zero or more workflows argo archive retry [WORKFLOW...] [flags] Examples \u00b6 # Retry a workflow: argo archive retry uid # Retry multiple workflows: argo archive retry uid another-uid # Retry multiple workflows by label selector: argo archive retry -l workflows.argoproj.io/test=true # Retry multiple workflows by field selector: argo archive retry --field-selector metadata.namespace=argo # Retry and wait for completion: argo archive retry --wait uid # Retry and watch until completion: argo archive retry --watch uid # Retry and tail logs until completion: argo archive retry --log uid Options \u00b6 --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for retry --log log the workflow until it completes --node-field-selector string selector of nodes to reset, eg: --node-field-selector inputs.paramaters.myparam.value=abc -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --restart-successful indicates to restart successful nodes matching the --node-field-selector -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is retried --watch watch the workflow until it completes, only works when a single workflow is retried Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo archive - manage the workflow archive","title":"argo archive retry"},{"location":"cli/argo_archive_retry/#argo-archive-retry","text":"retry zero or more workflows argo archive retry [WORKFLOW...] [flags]","title":"argo archive retry"},{"location":"cli/argo_archive_retry/#examples","text":"# Retry a workflow: argo archive retry uid # Retry multiple workflows: argo archive retry uid another-uid # Retry multiple workflows by label selector: argo archive retry -l workflows.argoproj.io/test=true # Retry multiple workflows by field selector: argo archive retry --field-selector metadata.namespace=argo # Retry and wait for completion: argo archive retry --wait uid # Retry and watch until completion: argo archive retry --watch uid # Retry and tail logs until completion: argo archive retry --log uid","title":"Examples"},{"location":"cli/argo_archive_retry/#options","text":"--field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. 
-h, --help help for retry --log log the workflow until it completes --node-field-selector string selector of nodes to reset, eg: --node-field-selector inputs.paramaters.myparam.value=abc -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --restart-successful indicates to restart successful nodes matching the --node-field-selector -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is retried --watch watch the workflow until it completes, only works when a single workflow is retried","title":"Options"},{"location":"cli/argo_archive_retry/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_archive_retry/#see-also","text":"argo archive - manage the workflow archive","title":"SEE ALSO"},{"location":"cli/argo_auth/","text":"argo auth \u00b6 manage authentication settings argo auth [flags] Options \u00b6 -h, --help help for auth Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo argo auth token - Print the auth token","title":"argo auth"},{"location":"cli/argo_auth/#argo-auth","text":"manage authentication settings argo auth [flags]","title":"argo auth"},{"location":"cli/argo_auth/#options","text":"-h, --help help for auth","title":"Options"},{"location":"cli/argo_auth/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_auth/#see-also","text":"argo - argo is the command line interface to Argo argo auth token - Print the auth token","title":"SEE ALSO"},{"location":"cli/argo_auth_token/","text":"argo auth token \u00b6 Print the auth token argo auth token [flags] Options \u00b6 -h, --help help for token Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo auth - manage authentication settings","title":"argo auth token"},{"location":"cli/argo_auth_token/#argo-auth-token","text":"Print the auth token argo auth token [flags]","title":"argo auth token"},{"location":"cli/argo_auth_token/#options","text":"-h, --help help for token","title":"Options"},{"location":"cli/argo_auth_token/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_auth_token/#see-also","text":"argo auth - manage authentication settings","title":"SEE ALSO"},{"location":"cli/argo_cluster-template/","text":"argo cluster-template \u00b6 manipulate cluster workflow templates argo cluster-template [flags] Options \u00b6 -h, --help help for cluster-template Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo argo cluster-template create - create a cluster workflow template argo cluster-template delete - delete a cluster workflow template argo cluster-template get - display details about a cluster workflow template argo cluster-template lint - validate files or directories of cluster workflow template manifests argo cluster-template list - list cluster workflow templates","title":"argo cluster-template"},{"location":"cli/argo_cluster-template/#argo-cluster-template","text":"manipulate cluster workflow templates argo cluster-template [flags]","title":"argo cluster-template"},{"location":"cli/argo_cluster-template/#options","text":"-h, --help help for cluster-template","title":"Options"},{"location":"cli/argo_cluster-template/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cluster-template/#see-also","text":"argo - argo is the command line interface to Argo argo cluster-template create - create a cluster workflow template argo cluster-template delete - delete a cluster workflow template argo cluster-template get - display details about a cluster workflow template argo cluster-template lint - validate files or directories of cluster workflow template manifests argo cluster-template list - list cluster workflow templates","title":"SEE ALSO"},{"location":"cli/argo_cluster-template_create/","text":"argo cluster-template create \u00b6 create a cluster workflow template argo cluster-template create FILE1 FILE2... [flags] Examples \u00b6 # Create a Cluster Workflow Template: argo cluster-template create FILE1 # Create a Cluster Workflow Template and print it as YAML: argo cluster-template create FILE1 --output yaml # Create a Cluster Workflow Template with relaxed validation: argo cluster-template create FILE1 --strict false Options \u00b6 -h, --help help for create -o, --output string Output format. One of: name|json|yaml|wide --strict perform strict workflow validation (default true) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cluster-template - manipulate cluster workflow templates","title":"argo cluster-template create"},{"location":"cli/argo_cluster-template_create/#argo-cluster-template-create","text":"create a cluster workflow template argo cluster-template create FILE1 FILE2... [flags]","title":"argo cluster-template create"},{"location":"cli/argo_cluster-template_create/#examples","text":"# Create a Cluster Workflow Template: argo cluster-template create FILE1 # Create a Cluster Workflow Template and print it as YAML: argo cluster-template create FILE1 --output yaml # Create a Cluster Workflow Template with relaxed validation: argo cluster-template create FILE1 --strict false","title":"Examples"},{"location":"cli/argo_cluster-template_create/#options","text":"-h, --help help for create -o, --output string Output format. One of: name|json|yaml|wide --strict perform strict workflow validation (default true)","title":"Options"},{"location":"cli/argo_cluster-template_create/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cluster-template_create/#see-also","text":"argo cluster-template - manipulate cluster workflow templates","title":"SEE ALSO"},{"location":"cli/argo_cluster-template_delete/","text":"argo cluster-template delete \u00b6 delete a cluster workflow template argo cluster-template delete WORKFLOW_TEMPLATE [flags] Options \u00b6 --all Delete all cluster workflow templates -h, --help help for delete Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cluster-template - manipulate cluster workflow templates","title":"argo cluster-template delete"},{"location":"cli/argo_cluster-template_delete/#argo-cluster-template-delete","text":"delete a cluster workflow template argo cluster-template delete WORKFLOW_TEMPLATE [flags]","title":"argo cluster-template delete"},{"location":"cli/argo_cluster-template_delete/#options","text":"--all Delete all cluster workflow templates -h, --help help for delete","title":"Options"},{"location":"cli/argo_cluster-template_delete/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cluster-template_delete/#see-also","text":"argo cluster-template - manipulate cluster workflow templates","title":"SEE ALSO"},{"location":"cli/argo_cluster-template_get/","text":"argo cluster-template get \u00b6 display details about a cluster workflow template argo cluster-template get CLUSTER WORKFLOW_TEMPLATE... [flags] Options \u00b6 -h, --help help for get -o, --output string Output format. One of: json|yaml|wide Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cluster-template - manipulate cluster workflow templates","title":"argo cluster-template get"},{"location":"cli/argo_cluster-template_get/#argo-cluster-template-get","text":"display details about a cluster workflow template argo cluster-template get CLUSTER WORKFLOW_TEMPLATE... [flags]","title":"argo cluster-template get"},{"location":"cli/argo_cluster-template_get/#options","text":"-h, --help help for get -o, --output string Output format. One of: json|yaml|wide","title":"Options"},{"location":"cli/argo_cluster-template_get/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cluster-template_get/#see-also","text":"argo cluster-template - manipulate cluster workflow templates","title":"SEE ALSO"},{"location":"cli/argo_cluster-template_lint/","text":"argo cluster-template lint \u00b6 validate files or directories of cluster workflow template manifests argo cluster-template lint FILE... [flags] Options \u00b6 -h, --help help for lint -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict perform strict workflow validation (default true) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cluster-template - manipulate cluster workflow templates","title":"argo cluster-template lint"},{"location":"cli/argo_cluster-template_lint/#argo-cluster-template-lint","text":"validate files or directories of cluster workflow template manifests argo cluster-template lint FILE... [flags]","title":"argo cluster-template lint"},{"location":"cli/argo_cluster-template_lint/#options","text":"-h, --help help for lint -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict perform strict workflow validation (default true)","title":"Options"},{"location":"cli/argo_cluster-template_lint/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. 
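A hedged example of argo cluster-template lint; the file and directory names are placeholders:
# Lint a single cluster workflow template manifest:
argo cluster-template lint my-template.yaml
# Lint a directory of manifests with simple output:
argo cluster-template lint templates/ --output simple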
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cluster-template_lint/#see-also","text":"argo cluster-template - manipulate cluster workflow templates","title":"SEE ALSO"},{"location":"cli/argo_cluster-template_list/","text":"argo cluster-template list \u00b6 list cluster workflow templates argo cluster-template list [flags] Examples \u00b6 # List Cluster Workflow Templates: argo cluster-template list # List Cluster Workflow Templates with additional details such as labels, annotations, and status: argo cluster-template list --output wide # List Cluster Workflow Templates by name only: argo cluster-template list -o name Options \u00b6 -h, --help help for list -o, --output string Output format. One of: wide|name Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. 
(Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cluster-template - manipulate cluster workflow templates","title":"argo cluster-template list"},{"location":"cli/argo_cluster-template_list/#argo-cluster-template-list","text":"list cluster workflow templates argo cluster-template list [flags]","title":"argo cluster-template list"},{"location":"cli/argo_cluster-template_list/#examples","text":"# List Cluster Workflow Templates: argo cluster-template list # List Cluster Workflow Templates with additional details such as labels, annotations, and status: argo cluster-template list --output wide # List Cluster Workflow Templates by name only: argo cluster-template list -o name","title":"Examples"},{"location":"cli/argo_cluster-template_list/#options","text":"-h, --help help for list -o, --output string Output format. One of: wide|name","title":"Options"},{"location":"cli/argo_cluster-template_list/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cluster-template_list/#see-also","text":"argo cluster-template - manipulate cluster workflow templates","title":"SEE ALSO"},{"location":"cli/argo_completion/","text":"argo completion \u00b6 output shell completion code for the specified shell (bash or zsh) Synopsis \u00b6 Write bash or zsh shell completion code to standard output. For bash, ensure you have bash completions installed and enabled. To access completions in your current shell, run $ source <(argo completion bash) Alternatively, write it to a file and source in .bash_profile For zsh, output to a file in a directory referenced by the $fpath shell variable. argo completion SHELL [flags] Options \u00b6 -h, --help help for completion Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. 
--as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo completion"},{"location":"cli/argo_completion/#argo-completion","text":"output shell completion code for the specified shell (bash or zsh)","title":"argo completion"},{"location":"cli/argo_completion/#synopsis","text":"Write bash or zsh shell completion code to standard output. For bash, ensure you have bash completions installed and enabled. To access completions in your current shell, run $ source <(argo completion bash) Alternatively, write it to a file and source in .bash_profile For zsh, output to a file in a directory referenced by the $fpath shell variable. argo completion SHELL [flags]","title":"Synopsis"},{"location":"cli/argo_completion/#options","text":"-h, --help help for completion","title":"Options"},{"location":"cli/argo_completion/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). 
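A short usage sketch for argo completion; the zsh output path assumes ~/.zfunc is a directory referenced by the $fpath shell variable (placeholder path):
# Enable completions in the current bash shell:
source <(argo completion bash)
# Write zsh completion code to a directory on $fpath:
argo completion zsh > ~/.zfunc/_argo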
Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_completion/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_cp/","text":"argo cp \u00b6 copy artifacts from workflow argo cp my-wf output-directory ... 
[flags] Examples \u00b6 # Copy a workflow's artifacts to a local output directory: argo cp my-wf output-directory # Copy artifacts from a specific node in a workflow to a local output directory: argo cp my-wf output-directory --node-id=my-wf-node-id-123 Options \u00b6 --artifact-name string name of output artifact in workflow -h, --help help for cp -n, --namespace string namespace of workflow --node-id string id of node in workflow --path string use variables {workflowName}, {nodeId}, {templateName}, {artifactName}, and {namespace} to create a customized path to store the artifacts; example: {workflowName}/{templateName}/{artifactName} (default \"{namespace}/{workflowName}/{nodeId}/outputs/{artifactName}\") --template-name string name of template in workflow Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo cp"},{"location":"cli/argo_cp/#argo-cp","text":"copy artifacts from workflow argo cp my-wf output-directory ... [flags]","title":"argo cp"},{"location":"cli/argo_cp/#examples","text":"# Copy a workflow's artifacts to a local output directory: argo cp my-wf output-directory # Copy artifacts from a specific node in a workflow to a local output directory: argo cp my-wf output-directory --node-id=my-wf-node-id-123","title":"Examples"},{"location":"cli/argo_cp/#options","text":"--artifact-name string name of output artifact in workflow -h, --help help for cp -n, --namespace string namespace of workflow --node-id string id of node in workflow --path string use variables {workflowName}, {nodeId}, {templateName}, {artifactName}, and {namespace} to create a customized path to store the artifacts; example: {workflowName}/{templateName}/{artifactName} (default \"{namespace}/{workflowName}/{nodeId}/outputs/{artifactName}\") --template-name string name of template in workflow","title":"Options"},{"location":"cli/argo_cp/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
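An illustrative argo cp invocation using the documented --path variables; my-wf and ./artifacts are placeholder names:
# Copy artifacts into a customized directory layout:
argo cp my-wf ./artifacts --path '{workflowName}/{templateName}/{artifactName}'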
A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cp/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_cron/","text":"argo cron \u00b6 manage cron workflows Synopsis \u00b6 NextScheduledRun assumes that the workflow-controller uses UTC as its timezone argo cron [flags] Options \u00b6 -h, --help help for cron Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. 
(default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo argo cron create - create a cron workflow argo cron delete - delete a cron workflow argo cron get - display details about a cron workflow argo cron lint - validate files or directories of cron workflow manifests argo cron list - list cron workflows argo cron resume - resume zero or more cron workflows argo cron suspend - suspend zero or more cron workflows","title":"argo cron"},{"location":"cli/argo_cron/#argo-cron","text":"manage cron workflows","title":"argo cron"},{"location":"cli/argo_cron/#synopsis","text":"NextScheduledRun assumes that the workflow-controller uses UTC as its timezone argo cron [flags]","title":"Synopsis"},{"location":"cli/argo_cron/#options","text":"-h, --help help for cron","title":"Options"},{"location":"cli/argo_cron/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. 
Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron/#see-also","text":"argo - argo is the command line interface to Argo argo cron create - create a cron workflow argo cron delete - delete a cron workflow argo cron get - display details about a cron workflow argo cron lint - validate files or directories of cron workflow manifests argo cron list - list cron workflows argo cron resume - resume zero or more cron workflows argo cron suspend - suspend zero or more cron workflows","title":"SEE ALSO"},{"location":"cli/argo_cron_create/","text":"argo cron create \u00b6 create a cron workflow argo cron create FILE1 FILE2... [flags] Options \u00b6 --entrypoint string override entrypoint --generate-name string override metadata.generateName -h, --help help for create -l, --labels string Comma separated labels to apply to the workflow. Will override previous values. --name string override metadata.name -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray pass an input parameter -f, --parameter-file string pass a file containing all input parameters --schedule string override cron workflow schedule --serviceaccount string run all pods in the workflow using specified serviceaccount --strict perform strict workflow validation (default true) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cron - manage cron workflows","title":"argo cron create"},{"location":"cli/argo_cron_create/#argo-cron-create","text":"create a cron workflow argo cron create FILE1 FILE2... [flags]","title":"argo cron create"},{"location":"cli/argo_cron_create/#options","text":"--entrypoint string override entrypoint --generate-name string override metadata.generateName -h, --help help for create -l, --labels string Comma separated labels to apply to the workflow. Will override previous values. --name string override metadata.name -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray pass an input parameter -f, --parameter-file string pass a file containing all input parameters --schedule string override cron workflow schedule --serviceaccount string run all pods in the workflow using specified serviceaccount --strict perform strict workflow validation (default true)","title":"Options"},{"location":"cli/argo_cron_create/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. 
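A hedged example of argo cron create; the manifest name and schedule are placeholders:
# Create a cron workflow and override its schedule:
argo cron create my-cron-wf.yaml --schedule "*/10 * * * *"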
(Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron_create/#see-also","text":"argo cron - manage cron workflows","title":"SEE ALSO"},{"location":"cli/argo_cron_delete/","text":"argo cron delete \u00b6 delete a cron workflow argo cron delete [CRON_WORKFLOW... | --all] [flags] Options \u00b6 --all Delete all cron workflows -h, --help help for delete Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. 
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cron - manage cron workflows","title":"argo cron delete"},{"location":"cli/argo_cron_delete/#argo-cron-delete","text":"delete a cron workflow argo cron delete [CRON_WORKFLOW... | --all] [flags]","title":"argo cron delete"},{"location":"cli/argo_cron_delete/#options","text":"--all Delete all cron workflows -h, --help help for delete","title":"Options"},{"location":"cli/argo_cron_delete/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
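For illustration, argo cron delete with a placeholder name:
# Delete a single cron workflow:
argo cron delete my-cron-wf
# Delete all cron workflows in the current namespace:
argo cron delete --all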
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron_delete/#see-also","text":"argo cron - manage cron workflows","title":"SEE ALSO"},{"location":"cli/argo_cron_get/","text":"argo cron get \u00b6 display details about a cron workflow argo cron get CRON_WORKFLOW... [flags] Options \u00b6 -h, --help help for get -o, --output string Output format. One of: json|yaml|wide Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. 
Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cron - manage cron workflows","title":"argo cron get"},{"location":"cli/argo_cron_get/#argo-cron-get","text":"display details about a cron workflow argo cron get CRON_WORKFLOW... [flags]","title":"argo cron get"},{"location":"cli/argo_cron_get/#options","text":"-h, --help help for get -o, --output string Output format. One of: json|yaml|wide","title":"Options"},{"location":"cli/argo_cron_get/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. 
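A minimal sketch of argo cron get; my-cron-wf is a placeholder:
# Display a cron workflow in YAML format:
argo cron get my-cron-wf -o yaml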
--kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron_get/#see-also","text":"argo cron - manage cron workflows","title":"SEE ALSO"},{"location":"cli/argo_cron_lint/","text":"argo cron lint \u00b6 validate files or directories of cron workflow manifests argo cron lint FILE... [flags] Options \u00b6 -h, --help help for lint -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict perform strict validation (default true) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cron - manage cron workflows","title":"argo cron lint"},{"location":"cli/argo_cron_lint/#argo-cron-lint","text":"validate files or directories of cron workflow manifests argo cron lint FILE... [flags]","title":"argo cron lint"},{"location":"cli/argo_cron_lint/#options","text":"-h, --help help for lint -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict perform strict validation (default true)","title":"Options"},{"location":"cli/argo_cron_lint/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron_lint/#see-also","text":"argo cron - manage cron workflows","title":"SEE ALSO"},{"location":"cli/argo_cron_list/","text":"argo cron list \u00b6 list cron workflows argo cron list [flags] Options \u00b6 -A, --all-namespaces Show workflows from all namespaces -h, --help help for list -o, --output string Output format. One of: wide|name -l, --selector string Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cron - manage cron workflows","title":"argo cron list"},{"location":"cli/argo_cron_list/#argo-cron-list","text":"list cron workflows argo cron list [flags]","title":"argo cron list"},{"location":"cli/argo_cron_list/#options","text":"-A, --all-namespaces Show workflows from all namespaces -h, --help help for list -o, --output string Output format. One of: wide|name -l, --selector string Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.","title":"Options"},{"location":"cli/argo_cron_list/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. 
Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron_list/#see-also","text":"argo cron - manage cron workflows","title":"SEE ALSO"},{"location":"cli/argo_cron_resume/","text":"argo cron resume \u00b6 resume zero or more cron workflows argo cron resume [CRON_WORKFLOW...] [flags] Options \u00b6 -h, --help help for resume Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cron - manage cron workflows","title":"argo cron resume"},{"location":"cli/argo_cron_resume/#argo-cron-resume","text":"resume zero or more cron workflows argo cron resume [CRON_WORKFLOW...] [flags]","title":"argo cron resume"},{"location":"cli/argo_cron_resume/#options","text":"-h, --help help for resume","title":"Options"},{"location":"cli/argo_cron_resume/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron_resume/#see-also","text":"argo cron - manage cron workflows","title":"SEE ALSO"},{"location":"cli/argo_cron_suspend/","text":"argo cron suspend \u00b6 suspend zero or more cron workflows argo cron suspend CRON_WORKFLOW... [flags] Options \u00b6 -h, --help help for suspend Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo cron - manage cron workflows","title":"argo cron suspend"},{"location":"cli/argo_cron_suspend/#argo-cron-suspend","text":"suspend zero or more cron workflows argo cron suspend CRON_WORKFLOW... [flags]","title":"argo cron suspend"},{"location":"cli/argo_cron_suspend/#options","text":"-h, --help help for suspend","title":"Options"},{"location":"cli/argo_cron_suspend/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_cron_suspend/#see-also","text":"argo cron - manage cron workflows","title":"SEE ALSO"},{"location":"cli/argo_delete/","text":"argo delete \u00b6 delete workflows argo delete [--dry-run] [WORKFLOW...|[--all] [--older] [--completed] [--resubmitted] [--prefix PREFIX] [--selector SELECTOR] [--force] [--status STATUS] ] [flags] Examples \u00b6 # Delete a workflow: argo delete my-wf # Delete the latest workflow: argo delete @latest Options \u00b6 --all Delete all workflows -A, --all-namespaces Delete workflows from all namespaces --completed Delete completed workflows --dry-run Do not delete the workflow, only print what would happen --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. --force Force delete workflows by removing finalizers -h, --help help for delete --older string Delete completed workflows finished before the specified duration (e.g. 10m, 3h, 1d) --prefix string Delete workflows by prefix --query-chunk-size int Run the list query in chunks (deletes will still be executed individually) --resubmitted Delete resubmitted workflows -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) --status strings Delete by status (comma separated) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo delete"},{"location":"cli/argo_delete/#argo-delete","text":"delete workflows argo delete [--dry-run] [WORKFLOW...|[--all] [--older] [--completed] [--resubmitted] [--prefix PREFIX] [--selector SELECTOR] [--force] [--status STATUS] ] [flags]","title":"argo delete"},{"location":"cli/argo_delete/#examples","text":"# Delete a workflow: argo delete my-wf # Delete the latest workflow: argo delete @latest","title":"Examples"},{"location":"cli/argo_delete/#options","text":"--all Delete all workflows -A, --all-namespaces Delete workflows from all namespaces --completed Delete completed workflows --dry-run Do not delete the workflow, only print what would happen --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. --force Force delete workflows by removing finalizers -h, --help help for delete --older string Delete completed workflows finished before the specified duration (e.g. 
10m, 3h, 1d) --prefix string Delete workflows by prefix --query-chunk-size int Run the list query in chunks (deletes will still be executed individually) --resubmitted Delete resubmitted workflows -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) --status strings Delete by status (comma separated)","title":"Options"},{"location":"cli/argo_delete/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_delete/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_executor-plugin/","text":"argo executor-plugin \u00b6 manage executor plugins argo executor-plugin [flags] Options \u00b6 -h, --help help for executor-plugin Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo argo executor-plugin build - build an executor plugin","title":"argo executor-plugin"},{"location":"cli/argo_executor-plugin/#argo-executor-plugin","text":"manage executor plugins argo executor-plugin [flags]","title":"argo executor-plugin"},{"location":"cli/argo_executor-plugin/#options","text":"-h, --help help for executor-plugin","title":"Options"},{"location":"cli/argo_executor-plugin/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_executor-plugin/#see-also","text":"argo - argo is the command line interface to Argo argo executor-plugin build - build an executor plugin","title":"SEE ALSO"},{"location":"cli/argo_executor-plugin_build/","text":"argo executor-plugin build \u00b6 build an executor plugin argo executor-plugin build DIR [flags] Options \u00b6 -h, --help help for build Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo executor-plugin - manage executor plugins","title":"argo executor-plugin build"},{"location":"cli/argo_executor-plugin_build/#argo-executor-plugin-build","text":"build an executor plugin argo executor-plugin build DIR [flags]","title":"argo executor-plugin build"},{"location":"cli/argo_executor-plugin_build/#options","text":"-h, --help help for build","title":"Options"},{"location":"cli/argo_executor-plugin_build/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_executor-plugin_build/#see-also","text":"argo executor-plugin - manage executor plugins","title":"SEE ALSO"},{"location":"cli/argo_get/","text":"argo get \u00b6 display details about a workflow argo get WORKFLOW... [flags] Examples \u00b6 # Get information about a workflow: argo get my-wf # Get the latest workflow: argo get @latest Options \u00b6 -h, --help help for get --no-color Disable colorized output --no-utf8 Use plain 7-bits ascii characters --node-field-selector string selector of node to display, eg: --node-field-selector phase=abc -o, --output string Output format. One of: json|yaml|short|wide --status string Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo get"},{"location":"cli/argo_get/#argo-get","text":"display details about a workflow argo get WORKFLOW... [flags]","title":"argo get"},{"location":"cli/argo_get/#examples","text":"# Get information about a workflow: argo get my-wf # Get the latest workflow: argo get @latest","title":"Examples"},{"location":"cli/argo_get/#options","text":"-h, --help help for get --no-color Disable colorized output --no-utf8 Use plain 7-bits ascii characters --node-field-selector string selector of node to display, eg: --node-field-selector phase=abc -o, --output string Output format. One of: json|yaml|short|wide --status string Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error)","title":"Options"},{"location":"cli/argo_get/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. 
(default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_get/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_lint/","text":"argo lint \u00b6 validate files or directories of manifests argo lint FILE... [flags] Examples \u00b6 # Lint all manifests in a specified directory: argo lint ./manifests # Lint only manifests of Workflows and CronWorkflows from stdin: cat manifests.yaml | argo lint --kinds=workflows,cronworkflows - Options \u00b6 -h, --help help for lint --kinds strings Which kinds will be linted. Can be: workflows|workflowtemplates|cronworkflows|clusterworkflowtemplates (default [all]) --offline perform offline linting. For resources referencing other resources, the references will be resolved from the provided args -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict Perform strict workflow validation (default true) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo lint"},{"location":"cli/argo_lint/#argo-lint","text":"validate files or directories of manifests argo lint FILE... [flags]","title":"argo lint"},{"location":"cli/argo_lint/#examples","text":"# Lint all manifests in a specified directory: argo lint ./manifests # Lint only manifests of Workflows and CronWorkflows from stdin: cat manifests.yaml | argo lint --kinds=workflows,cronworkflows -","title":"Examples"},{"location":"cli/argo_lint/#options","text":"-h, --help help for lint --kinds strings Which kinds will be linted. Can be: workflows|workflowtemplates|cronworkflows|clusterworkflowtemplates (default [all]) --offline perform offline linting. For resources referencing other resources, the references will be resolved from the provided args -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict Perform strict workflow validation (default true)","title":"Options"},{"location":"cli/argo_lint/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_lint/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_list/","text":"argo list \u00b6 list workflows argo list [flags] Examples \u00b6 # List all workflows: argo list # List all workflows from all namespaces: argo list -A # List all running workflows: argo list --running # List all completed workflows: argo list --completed # List workflows created within the last 10m: argo list --since 10m # List workflows that finished more than 2h ago: argo list --older 2h # List workflows with more information (such as parameters): argo list -o wide # List workflows in YAML format: argo list -o yaml # List workflows that have both labels: argo list -l label1=value1,label2=value2 Options \u00b6 -A, --all-namespaces Show workflows from all namespaces --chunk-size int Return large lists in chunks rather than all at once. Pass 0 to disable. --completed Show completed workflows. Mutually exclusive with --running. --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for list --no-headers Don't print headers (default print headers). --older string List completed workflows finished before the specified duration (e.g. 10m, 3h, 1d) -o, --output string Output format. One of: name|wide|yaml|json --prefix string Filter workflows by prefix --resubmitted Show resubmitted workflows --running Show running workflows. Mutually exclusive with --completed. -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. 
-l key1=value1,key2=value2) --since string Show only workflows created after than a relative duration --status strings Filter by status (comma separated) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo list"},{"location":"cli/argo_list/#argo-list","text":"list workflows argo list [flags]","title":"argo list"},{"location":"cli/argo_list/#examples","text":"# List all workflows: argo list # List all workflows from all namespaces: argo list -A # List all running workflows: argo list --running # List all completed workflows: argo list --completed # List workflows created within the last 10m: argo list --since 10m # List workflows that finished more than 2h ago: argo list --older 2h # List workflows with more information (such as parameters): argo list -o wide # List workflows in YAML format: argo list -o yaml # List workflows that have both labels: argo list -l label1=value1,label2=value2","title":"Examples"},{"location":"cli/argo_list/#options","text":"-A, --all-namespaces Show workflows from all namespaces --chunk-size int Return large lists in chunks rather than all at once. Pass 0 to disable. --completed Show completed workflows. Mutually exclusive with --running. --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for list --no-headers Don't print headers (default print headers). --older string List completed workflows finished before the specified duration (e.g. 10m, 3h, 1d) -o, --output string Output format. One of: name|wide|yaml|json --prefix string Filter workflows by prefix --resubmitted Show resubmitted workflows --running Show running workflows. Mutually exclusive with --completed. -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) --since string Show only workflows created after than a relative duration --status strings Filter by status (comma separated)","title":"Options"},{"location":"cli/argo_list/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. 
Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_list/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_logs/","text":"argo logs \u00b6 view logs of a pod or workflow argo logs WORKFLOW [POD] [flags] Examples \u00b6 # Print the logs of a workflow: argo logs my-wf # Follow the logs of a workflows: argo logs my-wf --follow # Print the logs of a workflows with a selector: argo logs my-wf -l app=sth # Print the logs of single container in a pod argo logs my-wf my-pod -c my-container # Print the logs of a workflow's pods: argo logs my-wf my-pod # Print the logs of a pods: argo logs --since=1h my-pod # Print the logs of the latest workflow: argo logs @latest Options \u00b6 -c, --container string Print the logs of this container (default \"main\") -f, --follow Specify if the logs should be streamed. --grep string grep for lines -h, --help help for logs --no-color Disable colorized output -p, --previous Specify if the previously terminated container logs should be returned. -l, --selector string log selector for some pod --since duration Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. Only one of since-time / since may be used. --since-time string Only return logs after a specific date (RFC3339). Defaults to all logs. Only one of since-time / since may be used. --tail int If set, the number of lines from the end of the logs to show. If not specified, logs are shown from the creation of the container or sinceSeconds or sinceTime (default -1) --timestamps Include timestamps on each line in the log output Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. 
--as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo logs"},{"location":"cli/argo_logs/#argo-logs","text":"view logs of a pod or workflow argo logs WORKFLOW [POD] [flags]","title":"argo logs"},{"location":"cli/argo_logs/#examples","text":"# Print the logs of a workflow: argo logs my-wf # Follow the logs of a workflows: argo logs my-wf --follow # Print the logs of a workflows with a selector: argo logs my-wf -l app=sth # Print the logs of single container in a pod argo logs my-wf my-pod -c my-container # Print the logs of a workflow's pods: argo logs my-wf my-pod # Print the logs of a pods: argo logs --since=1h my-pod # Print the logs of the latest workflow: argo logs @latest","title":"Examples"},{"location":"cli/argo_logs/#options","text":"-c, --container string Print the logs of this container (default \"main\") -f, --follow Specify if the logs should be streamed. 
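Illustrative combinations of the flags above (added for clarity; workflow, pod and container names are placeholders): # Follow the logs of the latest workflow and include timestamps: argo logs @latest --follow --timestamps # Show the last 100 lines of the main container, keeping only lines that contain 'error': argo logs my-wf my-pod -c main --tail 100 --grep error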
--grep string grep for lines -h, --help help for logs --no-color Disable colorized output -p, --previous Specify if the previously terminated container logs should be returned. -l, --selector string log selector for some pod --since duration Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. Only one of since-time / since may be used. --since-time string Only return logs after a specific date (RFC3339). Defaults to all logs. Only one of since-time / since may be used. --tail int If set, the number of lines from the end of the logs to show. If not specified, logs are shown from the creation of the container or sinceSeconds or sinceTime (default -1) --timestamps Include timestamps on each line in the log output","title":"Options"},{"location":"cli/argo_logs/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enable verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_logs/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_node/","text":"argo node \u00b6 perform action on a node in a workflow argo node ACTION WORKFLOW FLAGS [flags] Examples \u00b6 # Set outputs to a node within a workflow: argo node set my-wf --output-parameter parameter-name=\"Hello, world!\" --node-field-selector displayName=approve # Set the message of a node within a workflow: argo node set my-wf --message \"We did it!\" --node-field-selector displayName=approve Options \u00b6 -h, --help help for node -m, --message string Set the message of a node, eg: --message \"Hello, world!\" --node-field-selector string Selector of node to set, eg: --node-field-selector inputs.parameters.myparam.value=abc -p, --output-parameter stringArray Set a \"supplied\" output parameter of node, eg: --output-parameter parameter-name=\"Hello, world!\" --phase string Phase to set the node to, eg: --phase Succeeded Options inherited from parent commands \u00b6 --argo-base-href string A path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g.
1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo node"},{"location":"cli/argo_node/#argo-node","text":"perform action on a node in a workflow argo node ACTION WORKFLOW FLAGS [flags]","title":"argo node"},{"location":"cli/argo_node/#examples","text":"# Set outputs to a node within a workflow: argo node set my-wf --output-parameter parameter-name=\"Hello, world!\" --node-field-selector displayName=approve # Set the message of a node within a workflow: argo node set my-wf --message \"We did it!\"\" --node-field-selector displayName=approve","title":"Examples"},{"location":"cli/argo_node/#options","text":"-h, --help help for node -m, --message string Set the message of a node, eg: --message \"Hello, world!\" --node-field-selector string Selector of node to set, eg: --node-field-selector inputs.paramaters.myparam.value=abc -p, --output-parameter stringArray Set a \"supplied\" output parameter of node, eg: --output-parameter parameter-name=\"Hello, world!\" --phase string Phase to set the node to, eg: --phase Succeeded","title":"Options"},{"location":"cli/argo_node/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. 
Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_node/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_resubmit/","text":"argo resubmit \u00b6 resubmit one or more workflows Synopsis \u00b6 Submit a completed workflow again. Optionally override parameters and memoize. Similar to running argo submit again with the same parameters. argo resubmit [WORKFLOW...] [flags] Examples \u00b6 # Resubmit a workflow: argo resubmit my-wf # Resubmit multiple workflows: argo resubmit my-wf my-other-wf my-third-wf # Resubmit multiple workflows by label selector: argo resubmit -l workflows.argoproj.io/test=true # Resubmit multiple workflows by field selector: argo resubmit --field-selector metadata.namespace=argo # Resubmit and wait for completion: argo resubmit --wait my-wf.yaml # Resubmit and watch until completion: argo resubmit --watch my-wf.yaml # Resubmit and tail logs until completion: argo resubmit --log my-wf.yaml # Resubmit the latest workflow: argo resubmit @latest Options \u00b6 --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for resubmit --log log the workflow until it completes --memoized re-use successful steps & outputs from the previous run -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --priority int32 workflow priority -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is resubmitted --watch watch the workflow until it completes, only works when a single workflow is resubmitted Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. 
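Illustrative usage (added for clarity; the parameter name and value are placeholders): # Resubmit the latest workflow, re-using successful steps and overriding a parameter: argo resubmit @latest --memoized -p message=hello # Resubmit a workflow and print only the name of the new workflow: argo resubmit my-wf -o name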
--as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo resubmit"},{"location":"cli/argo_resubmit/#argo-resubmit","text":"resubmit one or more workflows","title":"argo resubmit"},{"location":"cli/argo_resubmit/#synopsis","text":"Submit a completed workflow again. Optionally override parameters and memoize. Similar to running argo submit again with the same parameters. argo resubmit [WORKFLOW...] 
[flags]","title":"Synopsis"},{"location":"cli/argo_resubmit/#examples","text":"# Resubmit a workflow: argo resubmit my-wf # Resubmit multiple workflows: argo resubmit my-wf my-other-wf my-third-wf # Resubmit multiple workflows by label selector: argo resubmit -l workflows.argoproj.io/test=true # Resubmit multiple workflows by field selector: argo resubmit --field-selector metadata.namespace=argo # Resubmit and wait for completion: argo resubmit --wait my-wf.yaml # Resubmit and watch until completion: argo resubmit --watch my-wf.yaml # Resubmit and tail logs until completion: argo resubmit --log my-wf.yaml # Resubmit the latest workflow: argo resubmit @latest","title":"Examples"},{"location":"cli/argo_resubmit/#options","text":"--field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for resubmit --log log the workflow until it completes --memoized re-use successful steps & outputs from the previous run -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --priority int32 workflow priority -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is resubmitted --watch watch the workflow until it completes, only works when a single workflow is resubmitted","title":"Options"},{"location":"cli/argo_resubmit/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_resubmit/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_resume/","text":"argo resume \u00b6 resume zero or more workflows (opposite of suspend) argo resume WORKFLOW1 WORKFLOW2... [flags] Examples \u00b6 # Resume a workflow that has been suspended: argo resume my-wf # Resume multiple workflows: argo resume my-wf my-other-wf my-third-wf # Resume the latest workflow: argo resume @latest # Resume multiple workflows by node field selector: argo resume --node-field-selector inputs.paramaters.myparam.value=abc Options \u00b6 -h, --help help for resume --node-field-selector string selector of node to resume, eg: --node-field-selector inputs.paramaters.myparam.value=abc Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. 
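Illustrative usage (added for clarity; the selector value is a placeholder): # Resume only the suspended node whose displayName is approve: argo resume my-wf --node-field-selector displayName=approve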
--instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo resume"},{"location":"cli/argo_resume/#argo-resume","text":"resume zero or more workflows (opposite of suspend) argo resume WORKFLOW1 WORKFLOW2... [flags]","title":"argo resume"},{"location":"cli/argo_resume/#examples","text":"# Resume a workflow that has been suspended: argo resume my-wf # Resume multiple workflows: argo resume my-wf my-other-wf my-third-wf # Resume the latest workflow: argo resume @latest # Resume multiple workflows by node field selector: argo resume --node-field-selector inputs.paramaters.myparam.value=abc","title":"Examples"},{"location":"cli/argo_resume/#options","text":"-h, --help help for resume --node-field-selector string selector of node to resume, eg: --node-field-selector inputs.paramaters.myparam.value=abc","title":"Options"},{"location":"cli/argo_resume/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. 
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_resume/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_retry/","text":"argo retry \u00b6 retry zero or more workflows Synopsis \u00b6 Rerun a failed Workflow. Specifically, rerun all failed steps. The same Workflow object is used and no new Workflows are created. argo retry [WORKFLOW...] [flags] Examples \u00b6 # Retry a workflow: argo retry my-wf # Retry multiple workflows: argo retry my-wf my-other-wf my-third-wf # Retry multiple workflows by label selector: argo retry -l workflows.argoproj.io/test=true # Retry multiple workflows by field selector: argo retry --field-selector metadata.namespace=argo # Retry and wait for completion: argo retry --wait my-wf.yaml # Retry and watch until completion: argo retry --watch my-wf.yaml # Retry and tail logs until completion: argo retry --log my-wf.yaml # Retry the latest workflow: argo retry @latest # Restart node with id 5 on successful workflow, using node-field-selector argo retry my-wf --restart-successful --node-field-selector id=5 Options \u00b6 --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for retry --log log the workflow until it completes --node-field-selector string selector of nodes to reset, eg: --node-field-selector inputs.paramaters.myparam.value=abc -o, --output string Output format. 
One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --restart-successful indicates to restart successful nodes matching the --node-field-selector -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is retried --watch watch the workflow until it completes, only works when a single workflow is retried Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo retry"},{"location":"cli/argo_retry/#argo-retry","text":"retry zero or more workflows","title":"argo retry"},{"location":"cli/argo_retry/#synopsis","text":"Rerun a failed Workflow. Specifically, rerun all failed steps. The same Workflow object is used and no new Workflows are created. argo retry [WORKFLOW...] [flags]","title":"Synopsis"},{"location":"cli/argo_retry/#examples","text":"# Retry a workflow: argo retry my-wf # Retry multiple workflows: argo retry my-wf my-other-wf my-third-wf # Retry multiple workflows by label selector: argo retry -l workflows.argoproj.io/test=true # Retry multiple workflows by field selector: argo retry --field-selector metadata.namespace=argo # Retry and wait for completion: argo retry --wait my-wf.yaml # Retry and watch until completion: argo retry --watch my-wf.yaml # Retry and tail logs until completion: argo retry --log my-wf.yaml # Retry the latest workflow: argo retry @latest # Restart node with id 5 on successful workflow, using node-field-selector argo retry my-wf --restart-successful --node-field-selector id=5","title":"Examples"},{"location":"cli/argo_retry/#options","text":"--field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for retry --log log the workflow until it completes --node-field-selector string selector of nodes to reset, eg: --node-field-selector inputs.paramaters.myparam.value=abc -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray input parameter to override on the original workflow spec --restart-successful indicates to restart successful nodes matching the --node-field-selector -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) -w, --wait wait for the workflow to complete, only works when a single workflow is retried --watch watch the workflow until it completes, only works when a single workflow is retried","title":"Options"},{"location":"cli/argo_retry/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. 
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_retry/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_server/","text":"argo server \u00b6 start the Argo Server argo server [flags] Examples \u00b6 See https://argoproj.github.io/argo-workflows/argo-server/ Options \u00b6 --access-control-allow-origin string Set Access-Control-Allow-Origin header in HTTP responses. --allowed-link-protocol stringArray Allowed link protocol in configMap. Used if the allowed configMap links protocol are different from http,https. Defaults to the environment variable ALLOWED_LINK_PROTOCOL (default [http,https]) --api-rate-limit uint Set limit per IP for api ratelimiter (default 1000) --auth-mode stringArray API server authentication mode. Any 1 or more length permutation of: client,server,sso (default [client]) --basehref string Value for base href in index.html. Used if the server is running behind reverse proxy under subpath different from /. Defaults to the environment variable BASE_HREF. (default \"/\") -b, --browser enable automatic launching of the browser [local mode] --configmap string Name of K8s configmap to retrieve workflow controller configuration (default \"workflow-controller-configmap\") --event-async-dispatch dispatch event async --event-operation-queue-size int how many events operations that can be queued at once (default 16) --event-worker-count int how many event workers to run (default 4) -h, --help help for server --hsts Whether or not we should add a HTTP Secure Transport Security header. This only has effect if secure is enabled. (default true) --kube-api-burst int Burst to use while talking with kube-apiserver. (default 30) --kube-api-qps float32 QPS to use while talking with kube-apiserver. 
(default 20) --log-format string The formatter to use for logs. One of: text|json (default \"text\") --managed-namespace string namespace that the server watches; defaults to the installation namespace --namespaced run as namespaced mode -p, --port int Port to listen on (default 2746) -e, --secure Whether or not we should listen on TLS. (default true) --tls-certificate-secret-name string The name of a Kubernetes secret that contains the server certificates --x-frame-options string Set X-Frame-Options header in HTTP responses. (default \"DENY\") Options inherited from parent commands \u00b6 --argo-base-href string A path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enable verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo server"},{"location":"cli/argo_server/#argo-server","text":"start the Argo Server argo server [flags]","title":"argo server"},{"location":"cli/argo_server/#examples","text":"See https://argoproj.github.io/argo-workflows/argo-server/","title":"Examples"},{"location":"cli/argo_server/#options","text":"--access-control-allow-origin string Set Access-Control-Allow-Origin header in HTTP responses. --allowed-link-protocol stringArray Allowed link protocol in configMap. Used if the allowed configMap links protocol are different from http,https. Defaults to the environment variable ALLOWED_LINK_PROTOCOL (default [http,https]) --api-rate-limit uint Set limit per IP for api ratelimiter (default 1000) --auth-mode stringArray API server authentication mode. Any 1 or more length permutation of: client,server,sso (default [client]) --basehref string Value for base href in index.html. Used if the server is running behind reverse proxy under subpath different from /. Defaults to the environment variable BASE_HREF. (default \"/\") -b, --browser enable automatic launching of the browser [local mode] --configmap string Name of K8s configmap to retrieve workflow controller configuration (default \"workflow-controller-configmap\") --event-async-dispatch dispatch event async --event-operation-queue-size int how many events operations that can be queued at once (default 16) --event-worker-count int how many event workers to run (default 4) -h, --help help for server --hsts Whether or not we should add a HTTP Secure Transport Security header. This only has effect if secure is enabled. (default true) --kube-api-burst int Burst to use while talking with kube-apiserver. (default 30) --kube-api-qps float32 QPS to use while talking with kube-apiserver. (default 20) --log-format string The formatter to use for logs. One of: text|json (default \"text\") --managed-namespace string namespace that watches, default to the installation namespace --namespaced run as namespaced mode -p, --port int Port to listen on (default 2746) -e, --secure Whether or not we should listen on TLS. (default true) --tls-certificate-secret-name string The name of a Kubernetes secret that contains the server certificates --x-frame-options string Set X-Frame-Options header in HTTP responses. (default \"DENY\")","title":"Options"},{"location":"cli/argo_server/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. 
(Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_server/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_stop/","text":"argo stop \u00b6 stop zero or more workflows allowing all exit handlers to run Synopsis \u00b6 Stop a workflow but still run exit handlers. argo stop WORKFLOW WORKFLOW2... [flags] Examples \u00b6 # Stop a workflow: argo stop my-wf # Stop the latest workflow: argo stop @latest # Stop multiple workflows by label selector argo stop -l workflows.argoproj.io/test=true # Stop multiple workflows by field selector argo stop --field-selector metadata.namespace=argo Options \u00b6 --dry-run If true, only print the workflows that would be stopped, without stopping them. --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for stop --message string Message to add to previously running nodes --node-field-selector string selector of node to stop, eg: --node-field-selector inputs.paramaters.myparam.value=abc -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. 
--as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo stop"},{"location":"cli/argo_stop/#argo-stop","text":"stop zero or more workflows allowing all exit handlers to run","title":"argo stop"},{"location":"cli/argo_stop/#synopsis","text":"Stop a workflow but still run exit handlers. argo stop WORKFLOW WORKFLOW2... [flags]","title":"Synopsis"},{"location":"cli/argo_stop/#examples","text":"# Stop a workflow: argo stop my-wf # Stop the latest workflow: argo stop @latest # Stop multiple workflows by label selector argo stop -l workflows.argoproj.io/test=true # Stop multiple workflows by field selector argo stop --field-selector metadata.namespace=argo","title":"Examples"},{"location":"cli/argo_stop/#options","text":"--dry-run If true, only print the workflows that would be stopped, without stopping them. --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). 
The server only supports a limited number of field queries per type. -h, --help help for stop --message string Message to add to previously running nodes --node-field-selector string selector of node to stop, eg: --node-field-selector inputs.parameters.myparam.value=abc -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)","title":"Options"},{"location":"cli/argo_stop/#options-inherited-from-parent-commands","text":"--argo-base-href string A path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enable verbose logging, i.e. 
--loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_stop/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_submit/","text":"argo submit \u00b6 submit a workflow argo submit [FILE... | --from `kind/name] [flags] Examples \u00b6 # Submit multiple workflows from files: argo submit my-wf.yaml # Submit and wait for completion: argo submit --wait my-wf.yaml # Submit and watch until completion: argo submit --watch my-wf.yaml # Submit and tail logs until completion: argo submit --log my-wf.yaml # Submit a single workflow from an existing resource argo submit --from cronwf/my-cron-wf Options \u00b6 --dry-run modify the workflow on the client-side without creating it --entrypoint string override entrypoint --from kind/name Submit from an existing kind/name E.g., --from=cronwf/hello-world-cwf --generate-name string override metadata.generateName -h, --help help for submit -l, --labels string Comma separated labels to apply to the workflow. Will override previous values. --log log the workflow until it completes --name string override metadata.name --node-field-selector string selector of node to display, eg: --node-field-selector phase=abc -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray pass an input parameter -f, --parameter-file string pass a file containing all input parameters --priority int32 workflow priority --scheduled-time string Override the workflow's scheduledTime parameter (useful for backfilling). The time must be RFC3339 --server-dry-run send request to server with dry-run flag which will modify the workflow without creating it --serviceaccount string run all pods in the workflow using specified serviceaccount --status string Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error). Should only be used with --watch. --strict perform strict workflow validation (default true) -w, --wait wait for the workflow to complete --watch watch the workflow until it completes Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo submit"},{"location":"cli/argo_submit/#argo-submit","text":"submit a workflow argo submit [FILE... | --from `kind/name] [flags]","title":"argo submit"},{"location":"cli/argo_submit/#examples","text":"# Submit multiple workflows from files: argo submit my-wf.yaml # Submit and wait for completion: argo submit --wait my-wf.yaml # Submit and watch until completion: argo submit --watch my-wf.yaml # Submit and tail logs until completion: argo submit --log my-wf.yaml # Submit a single workflow from an existing resource argo submit --from cronwf/my-cron-wf","title":"Examples"},{"location":"cli/argo_submit/#options","text":"--dry-run modify the workflow on the client-side without creating it --entrypoint string override entrypoint --from kind/name Submit from an existing kind/name E.g., --from=cronwf/hello-world-cwf --generate-name string override metadata.generateName -h, --help help for submit -l, --labels string Comma separated labels to apply to the workflow. Will override previous values. --log log the workflow until it completes --name string override metadata.name --node-field-selector string selector of node to display, eg: --node-field-selector phase=abc -o, --output string Output format. One of: name|json|yaml|wide -p, --parameter stringArray pass an input parameter -f, --parameter-file string pass a file containing all input parameters --priority int32 workflow priority --scheduled-time string Override the workflow's scheduledTime parameter (useful for backfilling). The time must be RFC3339 --server-dry-run send request to server with dry-run flag which will modify the workflow without creating it --serviceaccount string run all pods in the workflow using specified serviceaccount --status string Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error). Should only be used with --watch. 
--strict perform strict workflow validation (default true) -w, --wait wait for the workflow to complete --watch watch the workflow until it completes","title":"Options"},{"location":"cli/argo_submit/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_submit/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_suspend/","text":"argo suspend \u00b6 suspend zero or more workflows (opposite of resume) argo suspend WORKFLOW1 WORKFLOW2... 
[flags] Examples \u00b6 # Suspend a workflow: argo suspend my-wf # Suspend the latest workflow: argo suspend @latest Options \u00b6 -h, --help help for suspend Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo suspend"},{"location":"cli/argo_suspend/#argo-suspend","text":"suspend zero or more workflows (opposite of resume) argo suspend WORKFLOW1 WORKFLOW2... 
[flags]","title":"argo suspend"},{"location":"cli/argo_suspend/#examples","text":"# Suspend a workflow: argo suspend my-wf # Suspend the latest workflow: argo suspend @latest","title":"Examples"},{"location":"cli/argo_suspend/#options","text":"-h, --help help for suspend","title":"Options"},{"location":"cli/argo_suspend/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_suspend/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_template/","text":"argo template \u00b6 manipulate workflow templates argo template [flags] Options \u00b6 -h, --help help for template Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. 
--loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo argo template create - create a workflow template argo template delete - delete a workflow template argo template get - display details about a workflow template argo template lint - validate a file or directory of workflow template manifests argo template list - list workflow templates","title":"argo template"},{"location":"cli/argo_template/#argo-template","text":"manipulate workflow templates argo template [flags]","title":"argo template"},{"location":"cli/argo_template/#options","text":"-h, --help help for template","title":"Options"},{"location":"cli/argo_template/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_template/#see-also","text":"argo - argo is the command line interface to Argo argo template create - create a workflow template argo template delete - delete a workflow template argo template get - display details about a workflow template argo template lint - validate a file or directory of workflow template manifests argo template list - list workflow templates","title":"SEE ALSO"},{"location":"cli/argo_template_create/","text":"argo template create \u00b6 create a workflow template argo template create FILE1 FILE2... [flags] Options \u00b6 -h, --help help for create -o, --output string Output format. One of: name|json|yaml|wide --strict perform strict workflow validation (default true) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. 
(default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo template - manipulate workflow templates","title":"argo template create"},{"location":"cli/argo_template_create/#argo-template-create","text":"create a workflow template argo template create FILE1 FILE2... [flags]","title":"argo template create"},{"location":"cli/argo_template_create/#options","text":"-h, --help help for create -o, --output string Output format. One of: name|json|yaml|wide --strict perform strict workflow validation (default true)","title":"Options"},{"location":"cli/argo_template_create/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. 
(default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_template_create/#see-also","text":"argo template - manipulate workflow templates","title":"SEE ALSO"},{"location":"cli/argo_template_delete/","text":"argo template delete \u00b6 delete a workflow template argo template delete WORKFLOW_TEMPLATE [flags] Options \u00b6 --all Delete all workflow templates -h, --help help for delete Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. 
If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo template - manipulate workflow templates","title":"argo template delete"},{"location":"cli/argo_template_delete/#argo-template-delete","text":"delete a workflow template argo template delete WORKFLOW_TEMPLATE [flags]","title":"argo template delete"},{"location":"cli/argo_template_delete/#options","text":"--all Delete all workflow templates -h, --help help for delete","title":"Options"},{"location":"cli/argo_template_delete/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
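For quick reference, typical invocations of the template create and delete commands documented above could look like the following; the template and file names are placeholders, not taken from these pages:
# Create a workflow template from a manifest file:
argo template create my-template.yaml
# Create several templates and print the result as YAML:
argo template create one.yaml two.yaml -o yaml
# Delete a single template, or every template in the current namespace:
argo template delete my-template
argo template delete --all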
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_template_delete/#see-also","text":"argo template - manipulate workflow templates","title":"SEE ALSO"},{"location":"cli/argo_template_get/","text":"argo template get \u00b6 display details about a workflow template argo template get WORKFLOW_TEMPLATE... [flags] Options \u00b6 -h, --help help for get -o, --output string Output format. One of: json|yaml|wide Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo template - manipulate workflow templates","title":"argo template get"},{"location":"cli/argo_template_get/#argo-template-get","text":"display details about a workflow template argo template get WORKFLOW_TEMPLATE... [flags]","title":"argo template get"},{"location":"cli/argo_template_get/#options","text":"-h, --help help for get -o, --output string Output format. One of: json|yaml|wide","title":"Options"},{"location":"cli/argo_template_get/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
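As an illustrative example of the get command documented above (the template name is a placeholder):
# Display details about a workflow template:
argo template get my-template
# Print the full template manifest as YAML:
argo template get my-template -o yaml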
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_template_get/#see-also","text":"argo template - manipulate workflow templates","title":"SEE ALSO"},{"location":"cli/argo_template_lint/","text":"argo template lint \u00b6 validate a file or directory of workflow template manifests argo template lint (DIRECTORY | FILE1 FILE2 FILE3...) [flags] Options \u00b6 -h, --help help for lint -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict perform strict workflow validation (default true) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo template - manipulate workflow templates","title":"argo template lint"},{"location":"cli/argo_template_lint/#argo-template-lint","text":"validate a file or directory of workflow template manifests argo template lint (DIRECTORY | FILE1 FILE2 FILE3...) [flags]","title":"argo template lint"},{"location":"cli/argo_template_lint/#options","text":"-h, --help help for lint -o, --output string Linting results output format. One of: pretty|simple (default \"pretty\") --strict perform strict workflow validation (default true)","title":"Options"},{"location":"cli/argo_template_lint/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
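A hypothetical invocation of the lint command documented above (the paths are placeholders):
# Lint a directory of workflow template manifests with the default pretty output:
argo template lint ./workflow-templates/
# Lint individual files and print simple output:
argo template lint template-a.yaml template-b.yaml -o simple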
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_template_lint/#see-also","text":"argo template - manipulate workflow templates","title":"SEE ALSO"},{"location":"cli/argo_template_list/","text":"argo template list \u00b6 list workflow templates argo template list [flags] Options \u00b6 -A, --all-namespaces Show workflows from all namespaces -h, --help help for list -o, --output string Output format. One of: wide|name Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo template - manipulate workflow templates","title":"argo template list"},{"location":"cli/argo_template_list/#argo-template-list","text":"list workflow templates argo template list [flags]","title":"argo template list"},{"location":"cli/argo_template_list/#options","text":"-A, --all-namespaces Show workflows from all namespaces -h, --help help for list -o, --output string Output format. One of: wide|name","title":"Options"},{"location":"cli/argo_template_list/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
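For illustration, the list command documented above can be scoped and formatted with the flags shown; the output below is not reproduced here:
# List templates in the current namespace, across all namespaces, or as names only:
argo template list
argo template list -A
argo template list -o name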
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_template_list/#see-also","text":"argo template - manipulate workflow templates","title":"SEE ALSO"},{"location":"cli/argo_terminate/","text":"argo terminate \u00b6 terminate zero or more workflows immediately Synopsis \u00b6 Immediately stop a workflow and do not run any exit handlers. argo terminate WORKFLOW WORKFLOW2... [flags] Examples \u00b6 # Terminate a workflow: argo terminate my-wf # Terminate the latest workflow: argo terminate @latest # Terminate multiple workflows by label selector argo terminate -l workflows.argoproj.io/test=true # Terminate multiple workflows by field selector argo terminate --field-selector metadata.namespace=argo Options \u00b6 --dry-run Do not terminate the workflow, only print what would happen --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for terminate -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. 
One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo terminate"},{"location":"cli/argo_terminate/#argo-terminate","text":"terminate zero or more workflows immediately","title":"argo terminate"},{"location":"cli/argo_terminate/#synopsis","text":"Immediately stop a workflow and do not run any exit handlers. argo terminate WORKFLOW WORKFLOW2... [flags]","title":"Synopsis"},{"location":"cli/argo_terminate/#examples","text":"# Terminate a workflow: argo terminate my-wf # Terminate the latest workflow: argo terminate @latest # Terminate multiple workflows by label selector argo terminate -l workflows.argoproj.io/test=true # Terminate multiple workflows by field selector argo terminate --field-selector metadata.namespace=argo","title":"Examples"},{"location":"cli/argo_terminate/#options","text":"--dry-run Do not terminate the workflow, only print what would happen --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. -h, --help help for terminate -l, --selector string Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)","title":"Options"},{"location":"cli/argo_terminate/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. 
(Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_terminate/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_version/","text":"argo version \u00b6 print version information argo version [flags] Options \u00b6 -h, --help help for version --short print just the version number Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. 
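An illustrative use of the version command documented above:
# Print full version information, or just the version number:
argo version
argo version --short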
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo version"},{"location":"cli/argo_version/#argo-version","text":"print version information argo version [flags]","title":"argo version"},{"location":"cli/argo_version/#options","text":"-h, --help help for version --short print just the version number","title":"Options"},{"location":"cli/argo_version/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_version/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_wait/","text":"argo wait \u00b6 waits for workflows to complete argo wait [WORKFLOW...] [flags] Examples \u00b6 # Wait on a workflow: argo wait my-wf # Wait on the latest workflow: argo wait @latest Options \u00b6 -h, --help help for wait --ignore-not-found Ignore the wait if the workflow is not found Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo wait"},{"location":"cli/argo_wait/#argo-wait","text":"waits for workflows to complete argo wait [WORKFLOW...] [flags]","title":"argo wait"},{"location":"cli/argo_wait/#examples","text":"# Wait on a workflow: argo wait my-wf # Wait on the latest workflow: argo wait @latest","title":"Examples"},{"location":"cli/argo_wait/#options","text":"-h, --help help for wait --ignore-not-found Ignore the wait if the workflow is not found","title":"Options"},{"location":"cli/argo_wait/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_wait/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"cli/argo_watch/","text":"argo watch \u00b6 watch a workflow until it completes argo watch WORKFLOW [flags] Examples \u00b6 # Watch a workflow: argo watch my-wf # Watch the latest workflow: argo watch @latest Options \u00b6 -h, --help help for watch --node-field-selector string selector of node to display, eg: --node-field-selector phase=abc --status string Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error) Options inherited from parent commands \u00b6 --argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug SEE ALSO \u00b6 argo - argo is the command line interface to Argo","title":"argo watch"},{"location":"cli/argo_watch/#argo-watch","text":"watch a workflow until it completes argo watch WORKFLOW [flags]","title":"argo watch"},{"location":"cli/argo_watch/#examples","text":"# Watch a workflow: argo watch my-wf # Watch the latest workflow: argo watch @latest","title":"Examples"},{"location":"cli/argo_watch/#options","text":"-h, --help help for watch --node-field-selector string selector of node to display, eg: --node-field-selector phase=abc --status string Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error)","title":"Options"},{"location":"cli/argo_watch/#options-inherited-from-parent-commands","text":"--argo-base-href string An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable. --argo-http1 If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable. -s, --argo-server host:port API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable. --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --gloglevel int Set the glog logging level -H, --header strings Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true. 
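Building on the watch examples above, the --status and --node-field-selector flags can narrow what is displayed; the workflow name and selector values below are placeholders:
# Only show nodes currently in the Running phase:
argo watch my-wf --status Running
# Only show the node matching a field selector:
argo watch my-wf --node-field-selector displayName=build-step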
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure -k, --insecure-skip-verify If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable. --instanceid string submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable. --kubeconfig string Path to a kube config. Only required if out-of-cluster --loglevel string Set the logging level. One of: debug|info|warn|error (default \"info\") -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -e, --secure Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true) --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server -v, --verbose Enabled verbose logging, i.e. --loglevel debug","title":"Options inherited from parent commands"},{"location":"cli/argo_watch/#see-also","text":"argo - argo is the command line interface to Argo","title":"SEE ALSO"},{"location":"proposals/artifact-gc-proposal/","text":"Proposal for Artifact Garbage Collection \u00b6 Introduction \u00b6 The motivation for this is to enable users to automatically have certain Artifacts specified to be automatically garbage collected. Artifacts can be specified for Garbage Collection at different stages: OnWorkflowCompletion , OnWorkflowDeletion , OnWorkflowSuccess , OnWorkflowFailure , or Never Proposal Specifics \u00b6 Workflow Spec changes \u00b6 WorkflowSpec has an ArtifactGC structure, which consists of an ArtifactGCStrategy , as well as the optional designation of a ServiceAccount and Pod metadata (labels and annotations) to be used by the Pod doing the deletion. The ArtifactGCStrategy can be set to OnWorkflowCompletion , OnWorkflowDeletion , OnWorkflowSuccess , OnWorkflowFailure , or Never Artifact has an ArtifactGC section which can be used to override the Workflow level. Workflow Status changes \u00b6 Artifact has a boolean Deleted flag WorkflowStatus.Conditions can be set to ArtifactGCError WorkflowStatus can include a new field ArtGCStatus which holds additional information to keep track of the state of Artifact Garbage Collection. How it will work \u00b6 For each ArtifactGCStrategy the Controller will execute one Pod that runs in the user's namespace and deletes all artifacts pertaining to that strategy. Since OnWorkflowSuccess happens at the same time as OnWorkflowCompletion and OnWorkflowFailure also happens at the same time as OnWorkflowCompletion , we can consider consolidating these GC Strategies together. 
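To make the Workflow Spec changes described above concrete, here is a minimal sketch of a Workflow using the proposed ArtifactGC fields. The exact field names, service account, labels, image, and S3 key are illustrative assumptions based on this proposal, not definitions taken from it:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-gc-example-
spec:
  entrypoint: main
  artifactGC:                            # workflow-level default strategy (assumed field name)
    strategy: OnWorkflowDeletion
    serviceAccountName: artifact-gc-sa   # optional Service Account used by the GC Pod (illustrative)
    podMetadata:
      labels:
        app: artifact-gc                 # illustrative label applied to the GC Pod
  templates:
  - name: main
    container:
      image: alpine:3                    # illustrative image
      command: [sh, -c]
      args: ["echo hello > /tmp/hello.txt"]
    outputs:
      artifacts:
      - name: hello
        path: /tmp/hello.txt
        s3:
          key: "{{workflow.uid}}/hello.txt"   # parameterized key, as recommended under Documentation below
        artifactGC:
          strategy: Never                # artifact-level override of the workflow-level strategy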
We will have a new CRD type called ArtifactGCTask and use one or more of them to specify the Artifacts which the GC Pod will read and then write Status to (note individual artifacts have individual statuses). The Controller will read the Status and reflect that in the Workflow Status. The Controller will deem the ArtifactGCTasks ready to read once the Pod has completed (in success or failure). Once the GC Pod has completed and the Workflow status has been persisted, assuming the Pod completed with Success, the Controller can delete the ArtifactGCTasks , which will cause the GC Pod to also get deleted as it will be \"owned\" by the ArtifactGCTasks . The Workflow will have a Finalizer on it to prevent it from being deleted until Artifact GC has occurred. Once all deletions for all GC Strategies have occurred, the Controller will remove the Finalizer. Failures \u00b6 If a deletion fails, the Pod will retry a few times through exponential back-off. Note: it will not be considered a failure if the key does not exist - the principle of idempotence will allow this (i.e. if a Pod were to get evicted and then re-run it should be okay if some artifacts were previously deleted). Once it retries a few times, if it didn't succeed, it will end in a \"Failed\" state. The user will manually need to delete the ArtifactGCTasks (which will delete the GC Pod), and remove the Finalizer on the Workflow. The Failure will be reflected in both the Workflow Conditions as well as a Kubernetes Event (and the Artifacts that failed will have \"Deleted\"=false). Alternatives Considered \u00b6 For reference, these slides were presented to the Argo Contributor meeting on 7/12/22 which go through some of the alternative options that were weighed. These alternatives are explained below: One Pod Per Artifact \u00b6 The POC that was done, which uses just one Pod to delete each Artifact, was considered as an alternative for MVP (Option 1 from the slides). This option has these benefits: simpler in that the Pod doesn't require any additional Object to report status (e.g. ArtifactGCTask ) because it simply succeeds or fails based on its exit code (whereas in Option 2 the Pod needs to report individual failure statuses for each artifact) could have a very minimal Service Account which provides access to just that one artifact's location and these drawbacks: deletion is slower when performed by multiple Pods a Workflow with thousands of artifacts causes thousands of Pods to get executed, which could overwhelm kube-scheduler and kube-apiserver. if we delay the Artifact GC Pods by giving them a lower priority than the Workflow Pods, users will not get their artifacts deleted when they expect and may log bugs Summarizing ADR statement: \"In the context of Artifact Garbage Collection, facing whether to use a separate Pod for every artifact or not, we decided not to, to achieve faster garbage collection and reduced load on K8S, accepting that we will require a new CRD type.\" Service Account/IAM roles \u00b6 We considered some alternatives for how to specify Service Account and/or Annotations, which are applied to give the GC Pod access (slide 12). We will have them specify this information in a new ArtifactGC section of the spec that lives on the Workflow level but can be overridden on the Artifact level (option 3 from slide). 
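Relating back to the Failures section above: when Artifact GC ends in a Failed state, recovery is manual. A hedged sketch of that cleanup using standard kubectl commands follows; the workflow name, the label selector, and the assumption that the new CRD is served under a plural such as artifactgctasks are illustrative, not taken from this proposal:
# Delete the ArtifactGCTask objects, which also deletes the GC Pod they own (resource and label names assumed):
kubectl delete artifactgctasks -l workflows.argoproj.io/workflow=my-wf
# Clear the Workflow's finalizers (here, the Artifact GC finalizer) so it can be deleted:
kubectl patch workflow my-wf --type=json -p='[{"op": "remove", "path": "/metadata/finalizers"}]'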
Another option considered was to just allow specification on the Workflow level (option 2 from slide) so as to reduce the complexity of the code and reduce the potential number of Pods running, but Option 3 was selected in the end to maximize flexibility. Summarizing ADR statement: \"In the context of Artifact Garbage Collection, facing the question of how users should specify Service Account and annotations, we decided to give them the option to specify them on the Workflow level and/or override them on the Artifact level, to maximize flexibility for user needs, accepting that the code will be more complicated, and sometimes there will be many Pods running.\" MVP vs post-MVP \u00b6 We will start with just S3. We can also make other determinations if it makes sense to postpone some parts for after MVP. Workflow Spec Validation \u00b6 We can reject the Workflow during validation if ArtifactGC is configured along with a non-supported storage engine (for now probably anything besides S3). Documentation \u00b6 Need to clarify certain things in our documentation: Users need to know that if they don't name their artifacts with unique keys, they risk the same key being deleted by one Workflow and created by another at the same time. One recommendation is to parametrize the key, e.g. {{workflow.uid}}/hello.txt . Requirement to specify Service Account or Annotation for ArtifactGC specifically if they are needed (we won't fall back to default Workflow SA/annotations). Also, the Service Account needs to either be bound to the \"agent\" role or otherwise allow the same access to ArtifactGCTasks .","title":"Proposal for Artifact Garbage Collection"},{"location":"proposals/artifact-gc-proposal/#proposal-for-artifact-garbage-collection","text":"","title":"Proposal for Artifact Garbage Collection"},{"location":"proposals/artifact-gc-proposal/#introduction","text":"The motivation for this is to enable users to automatically have certain Artifacts specified to be automatically garbage collected. Artifacts can be specified for Garbage Collection at different stages: OnWorkflowCompletion , OnWorkflowDeletion , OnWorkflowSuccess , OnWorkflowFailure , or Never","title":"Introduction"},{"location":"proposals/artifact-gc-proposal/#proposal-specifics","text":"","title":"Proposal Specifics"},{"location":"proposals/artifact-gc-proposal/#workflow-spec-changes","text":"WorkflowSpec has an ArtifactGC structure, which consists of an ArtifactGCStrategy , as well as the optional designation of a ServiceAccount and Pod metadata (labels and annotations) to be used by the Pod doing the deletion. The ArtifactGCStrategy can be set to OnWorkflowCompletion , OnWorkflowDeletion , OnWorkflowSuccess , OnWorkflowFailure , or Never Artifact has an ArtifactGC section which can be used to override the Workflow level.","title":"Workflow Spec changes"},{"location":"proposals/artifact-gc-proposal/#workflow-status-changes","text":"Artifact has a boolean Deleted flag WorkflowStatus.Conditions can be set to ArtifactGCError WorkflowStatus can include a new field ArtGCStatus which holds additional information to keep track of the state of Artifact Garbage Collection.","title":"Workflow Status changes"},{"location":"proposals/artifact-gc-proposal/#how-it-will-work","text":"For each ArtifactGCStrategy the Controller will execute one Pod that runs in the user's namespace and deletes all artifacts pertaining to that strategy. 
Since OnWorkflowSuccess happens at the same time as OnWorkflowCompletion and OnWorkflowFailure also happens at the same time as OnWorkflowCompletion , we can consider consolidating these GC Strategies together. We will have a new CRD type called ArtifactGCTask and use one or more of them to specify the Artifacts which the GC Pod will read and then write Status to (note individual artifacts have individual statuses). The Controller will read the Status and reflect that in the Workflow Status. The Controller will deem the ArtifactGCTasks ready to read once the Pod has completed (in success or failure). Once the GC Pod has completed and the Workflow status has been persisted, assuming the Pod completed with Success, the Controller can delete the ArtifactGCTasks , which will cause the GC Pod to also get deleted as it will be \"owned\" by the ArtifactGCTasks . The Workflow will have a Finalizer on it to prevent it from being deleted until Artifact GC has occurred. Once all deletions for all GC Strategies have occurred, the Controller will remove the Finalizer.","title":"How it will work"},{"location":"proposals/artifact-gc-proposal/#failures","text":"If a deletion fails, the Pod will retry a few times through exponential back off. Note: it will not be considered a failure if the key does not exist - the principal of idempotence will allow this (i.e. if a Pod were to get evicted and then re-run it should be okay if some artifacts were previously deleted). Once it retries a few times, if it didn't succeed, it will end in a \"Failed\" state. The user will manually need to delete the ArtifactGCTasks (which will delete the GC Pod), and remove the Finalizer on the Workflow. The Failure will be reflected in both the Workflow Conditions as well as as a Kubernetes Event (and the Artifacts that failed will have \"Deleted\"=false).","title":"Failures"},{"location":"proposals/artifact-gc-proposal/#alternatives-considered","text":"For reference, these slides were presented to the Argo Contributor meeting on 7/12/22 which go through some of the alternative options that were weighed. These alternatives are explained below:","title":"Alternatives Considered"},{"location":"proposals/artifact-gc-proposal/#one-pod-per-artifact","text":"The POC that was done, which uses just one Pod to delete each Artifact, was considered as an alternative for MVP (Option 1 from the slides). This option has these benefits: simpler in that the Pod doesn't require any additional Object to report status (e.g. ArtifactGCTask ) because it simply succeeds or fails based on its exit code (whereas in Option 2 the Pod needs to report individual failure statuses for each artifact) could have a very minimal Service Account which provides access to just that one artifact's location and these drawbacks: deletion is slower when performed by multiple Pods a Workflow with thousands of artifacts causes thousands of Pods to get executed, which could overwhelm kube-scheduler and kube-apiserver. 
if we delay the Artifact GC Pods by giving them a lower priority than the Workflow Pods, users will not get their artifacts deleted when they expect and may log bugs Summarizing ADR statement: \"In the context of Artifact Garbage Collection, facing whether to use a separate Pod for every artifact or not, we decided not to, to achieve faster garbage collection and reduced load on K8S, accepting that we will require a new CRD type.\"","title":"One Pod Per Artifact"},{"location":"proposals/artifact-gc-proposal/#service-accountiam-roles","text":"We considered some alternatives for how to specify Service Account and/or Annotations, which are applied to give the GC Pod access (slide 12). We will have them specify this information in a new ArtifactGC section of the spec that lives on the Workflow level but can be overridden on the Artifact level (option 3 from slide). Another option considered was to just allow specification on the Workflow level (option 2 from slide) so as to reduce the complexity of the code and reduce the potential number of Pods running, but Option 3 was selected in the end to maximize flexibility. Summarizing ADR statement: \"In the context of Artifact Garbage Collection, facing the question of how users should specify Service Account and annotations, we decided to give them the option to specify them on the Workflow level and/or override them on the Artifact level, to maximize flexibility for user needs, accepting that the code will be more complicated, and sometimes there will be many Pods running.\"","title":"Service Account/IAM roles"},{"location":"proposals/artifact-gc-proposal/#mvp-vs-post-mvp","text":"We will start with just S3. We can also make other determinations if it makes sense to postpone some parts for after MVP.","title":"MVP vs post-MVP"},{"location":"proposals/artifact-gc-proposal/#workflow-spec-validation","text":"We can reject the Workflow during validation if ArtifactGC is configured along with a non-supported storage engine (for now probably anything besides S3).","title":"Workflow Spec Validation"},{"location":"proposals/artifact-gc-proposal/#documentation","text":"Need to clarify certain things in our documentation: Users need to know that if they don't name their artifacts with unique keys, they risk the same key being deleted by one Workflow and created by another at the same time. One recommendation is to parametrize the key, e.g. {{workflow.uid}}/hello.txt . Requirement to specify Service Account or Annotation for ArtifactGC specifically if they are needed (we won't fall back to default Workflow SA/annotations). Also, the Service Account needs to either be bound to the \"agent\" role or otherwise allow the same access to ArtifactGCTasks .","title":"Documentation"},{"location":"proposals/cron-wf-improvement-proposal/","text":"Proposal for Cron Workflows improvements \u00b6 Introduction \u00b6 Currently, CronWorkflows are a great resource if we want to run recurring tasks to infinity. However, it is missing the ability to customize it, for example define how many times a workflow should run or how to handle multiple failures. I believe argo workflows would benefit of having more configuration options for cron workflows, to allow to change its behavior based on the result of its child\u2019s success or failures. Below I present my thoughts on how we could improve them, but also some questions and concerns on how to properly do it. 
Proposal \u00b6 This proposal discusses the viability of adding 2 more fields into the cron workflow configuration: RunStrategy : maxSuccess : maxFailures : maxSuccess - defines how many child workflows must have success before suspending the workflow schedule maxFailures - defines how many child workflows must fail before suspending the workflow scheduling. This may contain Failed workflows, Errored workflows or spec errors. For example, if we want to run a workflow just once, we could just set: RunStrategy : maxSuccess : 1 This configuration will make sure the controller will keep scheduling workflows until one of them finishes with success. As another example, if we want to stop scheduling workflows when they keep failing, we could configure the CronWorkflow with: RunStrategy : maxFailures : 2 This config will stop scheduling workflows if fails twice. Total vs consecutive \u00b6 One aspect that needs to be discussed is whether these configurations apply to the entire life of a cron Workflow or just in consecutive schedules. For example, if we configure a workflow to stop scheduling after 2 failures, I think it makes sense to have this applied when it fails twice consecutively. Otherwise, we can have 2 outages in different periods which will suspend the workflow. On the other hand, when configuring a workflow to run twice with success, it would make more sense to have it execute with success regardless of whether it is a consecutive success or not. If we have an outage after the first workflow succeeds, which translates into failed workflows, it should need to execute with success only once. So I think it would make sense to have: maxFailures - maximum number of consecutive failures before stopping the scheduling of a workflow maxSuccess - maximum number of workflows with success. How to store state \u00b6 Since we need to control how many child workflows had success/failure we must store state. With this some questions arise: Should we just store it through the lifetime of the controller or should we store it to a database? Probably only makes sense if we can backup the state somewhere (like a BD). However, I don't have enough knowledge about workflow's architecture to tell how good of an idea this is. If a CronWorkflow gets re-applied, does it maintain or reset the number of success/failures? I guess it should reset since a configuration change should be seen as a new start. How to stop the workflow \u00b6 Once the configured number of failures or successes is reached, it is necessary to stop the workflow scheduling. I believe we have 3 options: Delete the workflow: In my opinion, this is the worst option and goes against gitops principles. Suspend it (set suspend=true): the workflow spec is changed to have the workflow suspended. I may be wrong but this conflicts with gitops as well. Stop scheduling it: The workflow spec is the same. The controller needs to check if the max number of runs was already attained and skip scheduling if it did. Option 3 seems to be the only possibility. After reaching the max configured executions, the cron workflow would exist forever but never scheduled. Maybe we could add a new status field, like Inactive and have something the UI to show it? How to handle suspended workflows \u00b6 One possible case that comes to mind is a long outage where all workflows are failing. For example, imagine a workflow that needs to download a file from some storage and for some reason that storage is down. Workflows will keep getting scheduled but they are going to fail. 
If they fail the number of configured maxFailures , the workflow gets stopped forever. Once the storage is back up, how can the user enable the workflow again? Manually re-create the workflow: could be an issue if the user has multiple cron workflows Instead of stopping the workflow scheduling, introduce a back-off period as suggested by #7291 . Or maybe allow both configurations. I believe option 2 would allow the user to select if they want to stop scheduling or not. If they do, when cron workflows are wrongfully halted, they will need to manually start them again. If they don't, Argo will only introduce a back-off period between schedules to avoid rescheduling workflows that are just going to fail. The spec could look something like: RunStrategy : maxSuccess : maxFailures : value : # this would be optional back-off : enabled : true factor : 2 With this configuration the user could configure 3 behaviors: set value if they wanted to stop scheduling a workflow after a maximum number of consecutive failures. set value and back-off if they wanted to stop scheduling a workflow after a maximum number of consecutive failures but with a back-off period between each failure set back-off if they want a back-off period between each failure but they never want to stop the workflow scheduling. Wrap up \u00b6 I believe this feature would enhance the cron workflows to allow more specific use cases that are commonly requested by the community, such as running a workflow only once. This proposal raises some concerns on how to properly implement it and I would like to know the maintainers'/contributors' opinion on these 4 topics, but also some other issues that I couldn't think of. Resources \u00b6 This discussion was prompted by #10620 A first approach to this problem was discussed in 5659 A draft PR to implement the first approach #5662","title":"Proposal for Cron Workflows improvements"},{"location":"proposals/cron-wf-improvement-proposal/#proposal-for-cron-workflows-improvements","text":"","title":"Proposal for Cron Workflows improvements"},{"location":"proposals/cron-wf-improvement-proposal/#introduction","text":"Currently, CronWorkflows are a great resource if we want to run recurring tasks indefinitely. However, they are missing the ability to customize their behavior, for example to define how many times a workflow should run or how to handle multiple failures. I believe Argo Workflows would benefit from having more configuration options for cron workflows, to allow changing their behavior based on the results of their children\u2019s successes or failures. Below I present my thoughts on how we could improve them, but also some questions and concerns on how to properly do it.","title":"Introduction"},{"location":"proposals/cron-wf-improvement-proposal/#proposal","text":"This proposal discusses the viability of adding 2 more fields into the cron workflow configuration: RunStrategy : maxSuccess : maxFailures : maxSuccess - defines how many child workflows must have success before suspending the workflow schedule maxFailures - defines how many child workflows must fail before suspending the workflow scheduling. This may contain Failed workflows, Errored workflows or spec errors. For example, if we want to run a workflow just once, we could just set: RunStrategy : maxSuccess : 1 This configuration will make sure the controller will keep scheduling workflows until one of them finishes with success.
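Purely as an illustration of where these proposed fields might live in a full resource, a sketch follows (note that runStrategy is a hypothetical field from this proposal and not implemented today; the name, schedule, and image are illustrative assumptions):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: run-until-first-success     # illustrative name
spec:
  schedule: "0 1 * * *"
  runStrategy:                      # hypothetical field from this proposal - not implemented today
    maxSuccess: 1                   # stop scheduling once one child workflow succeeds
  workflowSpec:
    entrypoint: main
    templates:
    - name: main
      container:
        image: argoproj/argosay:v2  # illustrative image
```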
As another example, if we want to stop scheduling workflows when they keep failing, we could configure the CronWorkflow with: RunStrategy : maxFailures : 2 This config will stop scheduling workflows if they fail twice.","title":"Proposal"},{"location":"proposals/cron-wf-improvement-proposal/#total-vs-consecutive","text":"One aspect that needs to be discussed is whether these configurations apply to the entire life of a cron Workflow or just in consecutive schedules. For example, if we configure a workflow to stop scheduling after 2 failures, I think it makes sense to have this applied when it fails twice consecutively. Otherwise, we can have 2 outages in different periods which will suspend the workflow. On the other hand, when configuring a workflow to run twice with success, it would make more sense to have it execute with success regardless of whether it is a consecutive success or not. If we have an outage after the first workflow succeeds, which translates into failed workflows, it should need to execute with success only once. So I think it would make sense to have: maxFailures - maximum number of consecutive failures before stopping the scheduling of a workflow maxSuccess - maximum number of workflows with success.","title":"Total vs consecutive"},{"location":"proposals/cron-wf-improvement-proposal/#how-to-store-state","text":"Since we need to control how many child workflows had success/failure, we must store state. With this, some questions arise: Should we just store it through the lifetime of the controller or should we store it to a database? It probably only makes sense if we can back up the state somewhere (like a DB). However, I don't have enough knowledge about workflow's architecture to tell how good of an idea this is. If a CronWorkflow gets re-applied, does it maintain or reset the number of success/failures? I guess it should reset since a configuration change should be seen as a new start.","title":"How to store state"},{"location":"proposals/cron-wf-improvement-proposal/#how-to-stop-the-workflow","text":"Once the configured number of failures or successes is reached, it is necessary to stop the workflow scheduling. I believe we have 3 options: Delete the workflow: In my opinion, this is the worst option and goes against GitOps principles. Suspend it (set suspend=true): the workflow spec is changed to have the workflow suspended. I may be wrong but this conflicts with GitOps as well. Stop scheduling it: The workflow spec is the same. The controller needs to check if the max number of runs was already attained and skip scheduling if it did. Option 3 seems to be the only possibility. After reaching the max configured executions, the cron workflow would exist forever but never be scheduled. Maybe we could add a new status field, like Inactive, and have something in the UI to show it?","title":"How to stop the workflow"},{"location":"proposals/cron-wf-improvement-proposal/#how-to-handle-suspended-workflows","text":"One possible case that comes to mind is a long outage where all workflows are failing. For example, imagine a workflow that needs to download a file from some storage and for some reason that storage is down. Workflows will keep getting scheduled but they are going to fail. If they fail the number of configured maxFailures , the workflow gets stopped forever. Once the storage is back up, how can the user enable the workflow again?
Manually re-create the workflow: could be an issue if the user has multiple cron workflows Instead of stopping the workflow scheduling, introduce a back-off period as suggested by #7291 . Or maybe allow both configurations. I believe option 2 would allow the user to select if they want to stop scheduling or not. If they do, when cron workflows are wrongfully halted, they will need to manually start them again. If they don't, Argo will only introduce a back-off period between schedules to avoid rescheduling workflows that are just going to fail. Spec could look something like: RunStrategy : maxSuccess : maxFailures : value : # this would be optional back-off : enabled : true factor : 2 With this configuration the user could configure 3 behaviors: set value if they wanted to stop scheduling a workflow after a maximum number of consecutive failures. set value and back-off if they wanted to stop scheduling a workflow after a maximum number of consecutive failures but with a back-off period between each failure set back-off if they want a back-off period between each failure but they never want to stop the workflow scheduling.","title":"How to handle suspended workflows"},{"location":"proposals/cron-wf-improvement-proposal/#wrap-up","text":"I believe this feature would enhance the cron workflows to allow more specific use cases that are commonly requested by the community, such as running a workflow only once. This proposal raises some concerns on how to properly implement it and I would like to know the maintainers/contributors opinion on these 4 topics, but also some other issues that I couldn't think of.","title":"Wrap up"},{"location":"proposals/cron-wf-improvement-proposal/#resources","text":"This discussion was prompted by #10620 A first approach to this problem was discussed in 5659 A draft PR to implement the first approach #5662","title":"Resources"},{"location":"proposals/makefile-improvement-proposal/","text":"Proposal for Makefile improvements \u00b6 Introduction \u00b6 The motivation for this proposal is to enable developers working on Argo Workflows to use build tools in a more reproducible way. Currently the Makefile is unfortunately too opinionated and as a result is often a blocker when first setting up Argo Workflows locally. I believe we should shrink the responsibilities of the Makefile and where possible outsource areas of responsibility to more specialized technology, such as Devenv/Nix in the case of dependency management. Proposal Specifics \u00b6 In order to better address reproducibility, it is better to split up the duties the Makefile currently performs into various sub components, that can be assembled in more appropriate technology. One important aspect here is to completely shift the responsibility of dependency management away from the Makefile and into technology such as Nix or Devenv. This proposal will also enable quicker access to a development build of Argo Workflows to developers, reducing the costs of on-boarding and barrier to entry. Devenv \u00b6 Benefits of Devenv \u00b6 Reproducible build environment Ability to run processes Disadvantages of Devenv \u00b6 Huge learning curve to tap into Nix functionality Less documentation Nix \u00b6 Benefits of Nix \u00b6 Reproducible build environment Direct raw control of various Nix related functionality instead of using Devenv More documentation Disadvantages of Nix \u00b6 Huge learning curve Recommendation \u00b6 I suggest that we use Nix over Devenv. 
I believe that our build environment is unique enough that we will be tapping into Nix anyway, it probably makes sense to directly use Nix in that case. Proposal \u00b6 In order to maximize the benefit we receive from using something like Nix, I suggest that we initially start off with a modest change to the Makefile. The first proposal would be to remove out all dependency management code and replace this functionality with Nix, where it is trivially possible. This may not be possible for some go lang related binaries we use, we will retain the Makefile functionality in those cases, at least for a while. Eventually we will migrate more and more of this responsibility away from the Makefile. Following Nix being responsible for all dependency management, we could start to consider moving more of our build system itself into Nix, perhaps it is easiest to start off with UI build as it is relatively painless. However, do note that this is not a requirement, I do not see a problem with the Makefile and the Nix file co-existing, it is more about finding a good balance between the reproducibility we desire and the effort we put into obtaining said reproducibility. An example for a replacement could be this dependency for example, note that we do not state any version here, replacing such installations with Nix based installations will ensure that we can ensure that if a build works on a certain developer's machine, it should also work on every other machine as well. What will Nix get us? \u00b6 As mentioned previously Nix gets us closer to reproducible build environments. It should ease significantly the on-boarding process of developers onto the project. There have been several developers who wanted to work on Argo Workflows but found the Makefile to be a barrier, it is likely that there are more developers on this boat. With a reproducible build environment, we hope that everyone who would like to contribute to the project is able to do so easily. It should also save time for engineers on-boarding onto the project, especially if they are using a system that is not Ubuntu or OSX. What will Nix cost us? \u00b6 If we proceed further with Nix, it will require some amount of people working on Argo Workflows to learn it, this is not a trivial task by any means. It will increase the barrier when it comes to changes that are build related, however, this isn't necessarily bad as build related changes should be far less frequent, the friction we will endure here is likely manageable. How will developers use nix? \u00b6 In the case that both Nix and the Makefile co-exist, we could use nix inside the Makefile itself. The Makefile calls into Nix to setup a developer environment with all dependencies, it will then continue the rest of the Makefile execution as normal. Following a complete or near complete migration to Nix, we can use nix-build for more of our tasks. An example of a C++ project environment is provided here Resources \u00b6 Nix Manual - Go Devenv How to Learn Nix","title":"Proposal for Makefile improvements"},{"location":"proposals/makefile-improvement-proposal/#proposal-for-makefile-improvements","text":"","title":"Proposal for Makefile improvements"},{"location":"proposals/makefile-improvement-proposal/#introduction","text":"The motivation for this proposal is to enable developers working on Argo Workflows to use build tools in a more reproducible way. Currently the Makefile is unfortunately too opinionated and as a result is often a blocker when first setting up Argo Workflows locally. 
I believe we should shrink the responsibilities of the Makefile and where possible outsource areas of responsibility to more specialized technology, such as Devenv/Nix in the case of dependency management.","title":"Introduction"},{"location":"proposals/makefile-improvement-proposal/#proposal-specifics","text":"In order to better address reproducibility, it is better to split up the duties the Makefile currently performs into various sub components, that can be assembled in more appropriate technology. One important aspect here is to completely shift the responsibility of dependency management away from the Makefile and into technology such as Nix or Devenv. This proposal will also enable quicker access to a development build of Argo Workflows to developers, reducing the costs of on-boarding and barrier to entry.","title":"Proposal Specifics"},{"location":"proposals/makefile-improvement-proposal/#devenv","text":"","title":"Devenv"},{"location":"proposals/makefile-improvement-proposal/#benefits-of-devenv","text":"Reproducible build environment Ability to run processes","title":"Benefits of Devenv"},{"location":"proposals/makefile-improvement-proposal/#disadvantages-of-devenv","text":"Huge learning curve to tap into Nix functionality Less documentation","title":"Disadvantages of Devenv"},{"location":"proposals/makefile-improvement-proposal/#nix","text":"","title":"Nix"},{"location":"proposals/makefile-improvement-proposal/#benefits-of-nix","text":"Reproducible build environment Direct raw control of various Nix related functionality instead of using Devenv More documentation","title":"Benefits of Nix"},{"location":"proposals/makefile-improvement-proposal/#disadvantages-of-nix","text":"Huge learning curve","title":"Disadvantages of Nix"},{"location":"proposals/makefile-improvement-proposal/#recommendation","text":"I suggest that we use Nix over Devenv. I believe that our build environment is unique enough that we will be tapping into Nix anyway, it probably makes sense to directly use Nix in that case.","title":"Recommendation"},{"location":"proposals/makefile-improvement-proposal/#proposal","text":"In order to maximize the benefit we receive from using something like Nix, I suggest that we initially start off with a modest change to the Makefile. The first proposal would be to remove out all dependency management code and replace this functionality with Nix, where it is trivially possible. This may not be possible for some go lang related binaries we use, we will retain the Makefile functionality in those cases, at least for a while. Eventually we will migrate more and more of this responsibility away from the Makefile. Following Nix being responsible for all dependency management, we could start to consider moving more of our build system itself into Nix, perhaps it is easiest to start off with UI build as it is relatively painless. However, do note that this is not a requirement, I do not see a problem with the Makefile and the Nix file co-existing, it is more about finding a good balance between the reproducibility we desire and the effort we put into obtaining said reproducibility. 
An example for a replacement could be this dependency for example, note that we do not state any version here, replacing such installations with Nix based installations will ensure that we can ensure that if a build works on a certain developer's machine, it should also work on every other machine as well.","title":"Proposal"},{"location":"proposals/makefile-improvement-proposal/#what-will-nix-get-us","text":"As mentioned previously Nix gets us closer to reproducible build environments. It should ease significantly the on-boarding process of developers onto the project. There have been several developers who wanted to work on Argo Workflows but found the Makefile to be a barrier, it is likely that there are more developers on this boat. With a reproducible build environment, we hope that everyone who would like to contribute to the project is able to do so easily. It should also save time for engineers on-boarding onto the project, especially if they are using a system that is not Ubuntu or OSX.","title":"What will Nix get us?"},{"location":"proposals/makefile-improvement-proposal/#what-will-nix-cost-us","text":"If we proceed further with Nix, it will require some amount of people working on Argo Workflows to learn it, this is not a trivial task by any means. It will increase the barrier when it comes to changes that are build related, however, this isn't necessarily bad as build related changes should be far less frequent, the friction we will endure here is likely manageable.","title":"What will Nix cost us?"},{"location":"proposals/makefile-improvement-proposal/#how-will-developers-use-nix","text":"In the case that both Nix and the Makefile co-exist, we could use nix inside the Makefile itself. The Makefile calls into Nix to setup a developer environment with all dependencies, it will then continue the rest of the Makefile execution as normal. Following a complete or near complete migration to Nix, we can use nix-build for more of our tasks. An example of a C++ project environment is provided here","title":"How will developers use nix?"},{"location":"proposals/makefile-improvement-proposal/#resources","text":"Nix Manual - Go Devenv How to Learn Nix","title":"Resources"},{"location":"use-cases/ci-cd/","text":"CI/CD \u00b6 Docs \u00b6 Quick start and training Learn about webhooks for triggering pipelines. Head to the Argo CD docs. Videos \u00b6 Distributed Load Testing Using Argo Workflows - Sumit Nagal (Intuit) CI/CD for Machine Learning at MLB using Argo Workflows - Eric Meadows How LitmusChaos uses Argo Workflows Tekton vs. Argo Workflows - Kubernetes-Native CI/CD Pipelines","title":"CI/CD"},{"location":"use-cases/ci-cd/#cicd","text":"","title":"CI/CD"},{"location":"use-cases/ci-cd/#docs","text":"Quick start and training Learn about webhooks for triggering pipelines. Head to the Argo CD docs.","title":"Docs"},{"location":"use-cases/ci-cd/#videos","text":"Distributed Load Testing Using Argo Workflows - Sumit Nagal (Intuit) CI/CD for Machine Learning at MLB using Argo Workflows - Eric Meadows How LitmusChaos uses Argo Workflows Tekton vs. 
Argo Workflows - Kubernetes-Native CI/CD Pipelines","title":"Videos"},{"location":"use-cases/data-processing/","text":"Data Processing \u00b6 Docs \u00b6 Quick start and training Videos \u00b6 Running a Data Replication Pipeline on Kubernetes with Argo and Singer.io Books \u00b6 Distributed Machine Learning Patterns (see Chapter 2 on data processing/ingestion patterns)","title":"Data Processing"},{"location":"use-cases/data-processing/#data-processing","text":"","title":"Data Processing"},{"location":"use-cases/data-processing/#docs","text":"Quick start and training","title":"Docs"},{"location":"use-cases/data-processing/#videos","text":"Running a Data Replication Pipeline on Kubernetes with Argo and Singer.io","title":"Videos"},{"location":"use-cases/data-processing/#books","text":"Distributed Machine Learning Patterns (see Chapter 2 on data processing/ingestion patterns)","title":"Books"},{"location":"use-cases/infrastructure-automation/","text":"Infrastructure Automation \u00b6 Docs \u00b6 Quick start and training Head to the Argo Events docs. Videos \u00b6 Infrastructure Automation with Argo at InsideBoard - Alexandre Le Mao (Head of infrastructure / Lead DevOps, InsideBoard) Argo and KNative - David Breitgand (IBM) - showing 5G infra automation use case How New Relic Uses Argo Workflows - Fischer Jemison, Jared Welch (New Relic) Building Kubernetes using Kubernetes - Tomas Valasek (SAP Concur)","title":"Infrastructure Automation"},{"location":"use-cases/infrastructure-automation/#infrastructure-automation","text":"","title":"Infrastructure Automation"},{"location":"use-cases/infrastructure-automation/#docs","text":"Quick start and training Head to the Argo Events docs.","title":"Docs"},{"location":"use-cases/infrastructure-automation/#videos","text":"Infrastructure Automation with Argo at InsideBoard - Alexandre Le Mao (Head of infrastructure / Lead DevOps, InsideBoard) Argo and KNative - David Breitgand (IBM) - showing 5G infra automation use case How New Relic Uses Argo Workflows - Fischer Jemison, Jared Welch (New Relic) Building Kubernetes using Kubernetes - Tomas Valasek (SAP Concur)","title":"Videos"},{"location":"use-cases/machine-learning/","text":"Machine Learning \u00b6 Docs \u00b6 Quick start and training Try out the updated Python and Java SDKs . Authoring and Submitting Argo Workflows using Python Head to the Kubeflow docs . Videos \u00b6 Automating Research Workflows at BlackRock Bridging into Python Ecosystem with Cloud-Native Distributed Machine Learning Pipelines Building Medical Grade AI with Argo Workflows CI/CD for Machine Learning at MLB using Argo Workflows - Eric Meadows Dynamic, Event-Driven Machine Learning Pipelines with Argo Workflows Machine Learning as Code: GitOps for ML with Kubeflow and Argo CD Machine Learning with Argo and Ploomber Making Complex R Forecast Applications Into Production Using Argo Workflows MLOps at TripAdvisor: ML Models CI/CD Automation with Argo - Ang Gao (Principal Software Engineer, TripAdvisor) Towards Cloud-Native Distributed Machine Learning Pipelines at Scale Books \u00b6 Distributed Machine Learning Patterns","title":"Machine Learning"},{"location":"use-cases/machine-learning/#machine-learning","text":"","title":"Machine Learning"},{"location":"use-cases/machine-learning/#docs","text":"Quick start and training Try out the updated Python and Java SDKs . 
Authoring and Submitting Argo Workflows using Python Head to the Kubeflow docs .","title":"Docs"},{"location":"use-cases/machine-learning/#videos","text":"Automating Research Workflows at BlackRock Bridging into Python Ecosystem with Cloud-Native Distributed Machine Learning Pipelines Building Medical Grade AI with Argo Workflows CI/CD for Machine Learning at MLB using Argo Workflows - Eric Meadows Dynamic, Event-Driven Machine Learning Pipelines with Argo Workflows Machine Learning as Code: GitOps for ML with Kubeflow and Argo CD Machine Learning with Argo and Ploomber Making Complex R Forecast Applications Into Production Using Argo Workflows MLOps at TripAdvisor: ML Models CI/CD Automation with Argo - Ang Gao (Principal Software Engineer, TripAdvisor) Towards Cloud-Native Distributed Machine Learning Pipelines at Scale","title":"Videos"},{"location":"use-cases/machine-learning/#books","text":"Distributed Machine Learning Patterns","title":"Books"},{"location":"use-cases/other/","text":"Other \u00b6 Argo can also be used for many other use-cases. Docs \u00b6 Quick start and training A Curated List of Awesome Projects and Resources Related to Argo","title":"Other"},{"location":"use-cases/other/#other","text":"Argo can also be used for many other use-cases.","title":"Other"},{"location":"use-cases/other/#docs","text":"Quick start and training A Curated List of Awesome Projects and Resources Related to Argo","title":"Docs"},{"location":"use-cases/stream-processing/","text":"Stream Processing \u00b6 Head to the ArgoLabs Dataflow docs.","title":"Stream Processing"},{"location":"use-cases/stream-processing/#stream-processing","text":"Head to the ArgoLabs Dataflow docs.","title":"Stream Processing"},{"location":"use-cases/webhdfs/","text":"webHDFS via HTTP artifacts \u00b6 webHDFS is a protocol allowing to access Hadoop or similar data storage via a unified REST API. Input Artifacts \u00b6 You can use HTTP artifacts to connect to webHDFS, where the URL will be the webHDFS endpoint including the file path and any query parameters. Suppose your webHDFS endpoint is available under https://mywebhdfsprovider.com/webhdfs/v1/ and you have a file my-art.txt located in a data folder, which you want to use as an input artifact. To construct the URL, you append the file path to the base webHDFS endpoint and set the OPEN operation via query parameter. The result is: https://mywebhdfsprovider.com/webhdfs/v1/data/my-art.txt?op=OPEN . See the below Workflow which will download the specified webHDFS artifact into the specified path : spec : # ... inputs : artifacts : - name : my-art path : /my-artifact http : url : \"https://mywebhdfsprovider.com/webhdfs/v1/file.txt?op=OPEN\" Additional fields can be set for HTTP artifacts (for example, headers). See usage in the full webHDFS example . Output Artifacts \u00b6 To declare a webHDFS output artifact, instead use the CREATE operation and set the file path to your desired location. In the below example, the artifact will be stored at outputs/newfile.txt . You can overwrite existing files with overwrite=true . spec : # ... outputs : artifacts : - name : my-art path : /my-artifact http : url : \"https://mywebhdfsprovider.com/webhdfs/v1/outputs/newfile.txt?op=CREATE&overwrite=true\" Authentication \u00b6 The above examples show minimal use cases without authentication. However, in a real-world scenario, you may want to use authentication. 
The authentication mechanism is limited to those supported by HTTP artifacts: HTTP Basic Auth OAuth2 Client Certificates Examples for the latter two mechanisms can be found in the full webHDFS example . Provider dependent While your webHDFS provider may support the above mechanisms, Hadoop itself only supports authentication via Kerberos SPNEGO and Hadoop delegation token. HTTP artifacts do not currently support SPNEGO, but delegation tokens can be used via the delegation query parameter.","title":"webHDFS via HTTP artifacts"},{"location":"use-cases/webhdfs/#webhdfs-via-http-artifacts","text":"webHDFS is a protocol allowing to access Hadoop or similar data storage via a unified REST API.","title":"webHDFS via HTTP artifacts"},{"location":"use-cases/webhdfs/#input-artifacts","text":"You can use HTTP artifacts to connect to webHDFS, where the URL will be the webHDFS endpoint including the file path and any query parameters. Suppose your webHDFS endpoint is available under https://mywebhdfsprovider.com/webhdfs/v1/ and you have a file my-art.txt located in a data folder, which you want to use as an input artifact. To construct the URL, you append the file path to the base webHDFS endpoint and set the OPEN operation via query parameter. The result is: https://mywebhdfsprovider.com/webhdfs/v1/data/my-art.txt?op=OPEN . See the below Workflow which will download the specified webHDFS artifact into the specified path : spec : # ... inputs : artifacts : - name : my-art path : /my-artifact http : url : \"https://mywebhdfsprovider.com/webhdfs/v1/file.txt?op=OPEN\" Additional fields can be set for HTTP artifacts (for example, headers). See usage in the full webHDFS example .","title":"Input Artifacts"},{"location":"use-cases/webhdfs/#output-artifacts","text":"To declare a webHDFS output artifact, instead use the CREATE operation and set the file path to your desired location. In the below example, the artifact will be stored at outputs/newfile.txt . You can overwrite existing files with overwrite=true . spec : # ... outputs : artifacts : - name : my-art path : /my-artifact http : url : \"https://mywebhdfsprovider.com/webhdfs/v1/outputs/newfile.txt?op=CREATE&overwrite=true\"","title":"Output Artifacts"},{"location":"use-cases/webhdfs/#authentication","text":"The above examples show minimal use cases without authentication. However, in a real-world scenario, you may want to use authentication. The authentication mechanism is limited to those supported by HTTP artifacts: HTTP Basic Auth OAuth2 Client Certificates Examples for the latter two mechanisms can be found in the full webHDFS example . Provider dependent While your webHDFS provider may support the above mechanisms, Hadoop itself only supports authentication via Kerberos SPNEGO and Hadoop delegation token. HTTP artifacts do not currently support SPNEGO, but delegation tokens can be used via the delegation query parameter.","title":"Authentication"},{"location":"walk-through/","text":"About \u00b6 Argo is implemented as a Kubernetes CRD (Custom Resource Definition). As a result, Argo workflows can be managed using kubectl and natively integrates with other Kubernetes services such as volumes, secrets, and RBAC. The new Argo software is light-weight and installs in under a minute, and provides complete workflow features including parameter substitution, artifacts, fixtures, loops and recursive workflows. Dozens of examples are available in the examples directory on GitHub. 
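For a first taste of what such an example looks like, here is a minimal hello-world style Workflow (a sketch based on the well-known whalesay example; save it as hello-world.yaml to follow along with the CLI commands shown next):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-    # Kubernetes appends a random suffix to this name
spec:
  entrypoint: whalesay          # the template to invoke first
  templates:
  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [cowsay]
      args: ["hello world"]
```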
For a complete description of the Argo workflow spec, please refer to the spec documentation . Progress through these examples in sequence to learn all the basics. Start with Argo CLI .","title":"About"},{"location":"walk-through/#about","text":"Argo is implemented as a Kubernetes CRD (Custom Resource Definition). As a result, Argo workflows can be managed using kubectl and natively integrates with other Kubernetes services such as volumes, secrets, and RBAC. The new Argo software is light-weight and installs in under a minute, and provides complete workflow features including parameter substitution, artifacts, fixtures, loops and recursive workflows. Dozens of examples are available in the examples directory on GitHub. For a complete description of the Argo workflow spec, please refer to the spec documentation . Progress through these examples in sequence to learn all the basics. Start with Argo CLI .","title":"About"},{"location":"walk-through/argo-cli/","text":"Argo CLI \u00b6 Installation \u00b6 To install the Argo CLI, follow the instructions on the GitHub Releases page . Usage \u00b6 In case you want to follow along with this walk-through, here's a quick overview of the most useful argo command line interface (CLI) commands. argo submit hello-world.yaml # submit a workflow spec to Kubernetes argo list # list current workflows argo get hello-world-xxx # get info about a specific workflow argo logs hello-world-xxx # print the logs from a workflow argo delete hello-world-xxx # delete workflow You can also run workflow specs directly using kubectl , but the Argo CLI provides syntax checking, nicer output, and requires less typing. See the CLI Reference for more details.","title":"Argo CLI"},{"location":"walk-through/argo-cli/#argo-cli","text":"","title":"Argo CLI"},{"location":"walk-through/argo-cli/#installation","text":"To install the Argo CLI, follow the instructions on the GitHub Releases page .","title":"Installation"},{"location":"walk-through/argo-cli/#usage","text":"In case you want to follow along with this walk-through, here's a quick overview of the most useful argo command line interface (CLI) commands. argo submit hello-world.yaml # submit a workflow spec to Kubernetes argo list # list current workflows argo get hello-world-xxx # get info about a specific workflow argo logs hello-world-xxx # print the logs from a workflow argo delete hello-world-xxx # delete workflow You can also run workflow specs directly using kubectl , but the Argo CLI provides syntax checking, nicer output, and requires less typing. See the CLI Reference for more details.","title":"Usage"},{"location":"walk-through/artifacts/","text":"Artifacts \u00b6 Note You will need to configure an artifact repository to run this example. When running workflows, it is very common to have steps that generate or consume artifacts. Often, the output artifacts of one step may be used as input artifacts to a subsequent step. The below workflow spec consists of two steps that run in sequence. The first step named generate-artifact will generate an artifact using the whalesay template that will be consumed by the second step named print-message that then consumes the generated artifact. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : artifact-passing- spec : entrypoint : artifact-example templates : - name : artifact-example steps : - - name : generate-artifact template : whalesay - - name : consume-artifact template : print-message arguments : artifacts : # bind message to the hello-art artifact # generated by the generate-artifact step - name : message from : \"{{steps.generate-artifact.outputs.artifacts.hello-art}}\" - name : whalesay container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"cowsay hello world | tee /tmp/hello_world.txt\" ] outputs : artifacts : # generate hello-art artifact from /tmp/hello_world.txt # artifacts can be directories as well as files - name : hello-art path : /tmp/hello_world.txt - name : print-message inputs : artifacts : # unpack the message input artifact # and put it at /tmp/message - name : message path : /tmp/message container : image : alpine:latest command : [ sh , -c ] args : [ \"cat /tmp/message\" ] The whalesay template uses the cowsay command to generate a file named /tmp/hello-world.txt . It then outputs this file as an artifact named hello-art . In general, the artifact's path may be a directory rather than just a file. The print-message template takes an input artifact named message , unpacks it at the path named /tmp/message and then prints the contents of /tmp/message using the cat command. The artifact-example template passes the hello-art artifact generated as an output of the generate-artifact step as the message input artifact to the print-message step. DAG templates use the tasks prefix to refer to another task, for example {{tasks.generate-artifact.outputs.artifacts.hello-art}} . Optionally, for large artifacts, you can set podSpecPatch in the workflow spec to increase the resource request for the init container and avoid any Out of memory issues. <... snipped ...> - name : large-artifact # below patch gets merged with the actual pod spec and increses the memory # request of the init container. podSpecPatch : | initContainers: - name: init resources: requests: memory: 2Gi cpu: 300m inputs : artifacts : - name : data path : /tmp/large-file container : image : alpine:latest command : [ sh , -c ] args : [ \"cat /tmp/large-file\" ] <... snipped ...> Artifacts are packaged as Tarballs and gzipped by default. You may customize this behavior by specifying an archive strategy, using the archive field. For example: <... snipped ...> outputs : artifacts : # default behavior - tar+gzip default compression. - name : hello-art-1 path : /tmp/hello_world.txt # disable archiving entirely - upload the file / directory as is. # this is useful when the container layout matches the desired target repository layout. - name : hello-art-2 path : /tmp/hello_world.txt archive : none : {} # customize the compression behavior (disabling it here). # this is useful for files with varying compression benefits, # e.g. disabling compression for a cached build workspace and large binaries, # or increasing compression for \"perfect\" textual data - like a json/xml export of a large database. - name : hello-art-3 path : /tmp/hello_world.txt archive : tar : # no compression (also accepts the standard gzip 1 to 9 values) compressionLevel : 0 <... snipped ...> Artifact Garbage Collection \u00b6 As of version 3.4 you can configure your Workflow to automatically delete Artifacts that you don't need (visit artifact repository capability for the current supported store engine). 
Artifacts can be deleted OnWorkflowCompletion or OnWorkflowDeletion . You can specify your Garbage Collection strategy on both the Workflow level and the Artifact level, so for example, you may have temporary artifacts that can be deleted right away but a final output that should be persisted: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : artifact-gc- spec : entrypoint : main artifactGC : strategy : OnWorkflowDeletion # default Strategy set here applies to all Artifacts by default templates : - name : main container : image : argoproj/argosay:v2 command : - sh - -c args : - | echo \"can throw this away\" > /tmp/temporary-artifact.txt echo \"keep this\" > /tmp/keep-this.txt outputs : artifacts : - name : temporary-artifact path : /tmp/temporary-artifact.txt s3 : key : temporary-artifact.txt - name : keep-this path : /tmp/keep-this.txt s3 : key : keep-this.txt artifactGC : strategy : Never # optional override for an Artifact Artifact Naming \u00b6 Consider parameterizing your S3 keys by {{workflow.uid}}, etc (as shown in the example above) if there's a possibility that you could have concurrent Workflows of the same spec. This would be to avoid a scenario in which the artifact from one Workflow is being deleted while the same S3 key is being generated for a different Workflow. Service Accounts and Annotations \u00b6 Does your S3 bucket require you to run with a special Service Account or IAM Role Annotation? You can either use the same ones you use for creating artifacts or generate new ones that are specific for deletion permission. Generally users will probably just have a single Service Account or IAM Role to apply to all artifacts for the Workflow, but you can also customize on the artifact level if you need that: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : artifact-gc- spec : entrypoint : main artifactGC : strategy : OnWorkflowDeletion ############################################################################################## # Workflow Level Service Account and Metadata ############################################################################################## serviceAccountName : my-sa podMetadata : annotations : eks.amazonaws.com/role-arn : arn:aws:iam::111122223333:role/my-iam-role templates : - name : main container : image : argoproj/argosay:v2 command : - sh - -c args : - | echo \"can throw this away\" > /tmp/temporary-artifact.txt echo \"keep this\" > /tmp/keep-this.txt outputs : artifacts : - name : temporary-artifact path : /tmp/temporary-artifact.txt s3 : key : temporary-artifact-{{workflow.uid}}.txt artifactGC : #################################################################################### # Optional override capability #################################################################################### serviceAccountName : artifact-specific-sa podMetadata : annotations : eks.amazonaws.com/role-arn : arn:aws:iam::111122223333:role/artifact-specific-iam-role - name : keep-this path : /tmp/keep-this.txt s3 : key : keep-this-{{workflow.uid}}.txt artifactGC : strategy : Never If you do supply your own Service Account you will need to create a RoleBinding that binds it with a role like this: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : annotations : workflows.argoproj.io/description : | This is the minimum recommended permissions needed if you want to use artifact GC. 
name : artifactgc rules : - apiGroups : - argoproj.io resources : - workflowartifactgctasks verbs : - list - watch - apiGroups : - argoproj.io resources : - workflowartifactgctasks/status verbs : - patch This is the artifactgc role if you installed using one of the quick-start manifest files. If you installed with the install.yaml file for the release then the same permissions are in the argo-cluster-role . If you don't use your own ServiceAccount and are just using default ServiceAccount, then the role needs a RoleBinding or ClusterRoleBinding to default ServiceAccount. What happens if Garbage Collection fails? \u00b6 If deletion of the artifact fails for some reason (other than the Artifact already having been deleted which is not considered a failure), the Workflow's Status will be marked with a new Condition to indicate \"Artifact GC Failure\", a Kubernetes Event will be issued, and the Argo Server UI will also indicate the failure. For additional debugging, the user should find 1 or more Pods named -artgc-* and can view the logs. If the user needs to delete the Workflow and its child CRD objects, they will need to patch the Workflow to remove the finalizer preventing the deletion: apiVersion : argoproj.io/v1alpha1 kind : Workflow finalizers : - workflows.argoproj.io/artifact-gc The finalizer can be deleted by doing: kubectl patch workflow my-wf \\ --type json \\ --patch = '[ { \"op\": \"remove\", \"path\": \"/metadata/finalizers\" } ]' Or for simplicity use the Argo CLI argo delete command with flag --force , which under the hood removes the finalizer before performing the deletion. Release Versions >= 3.5 \u00b6 A flag has been added to the Workflow Spec called forceFinalizerRemoval (see here ) to force the finalizer's removal even if Artifact GC fails: spec : artifactGC : strategy : OnWorkflowDeletion forceFinalizerRemoval : true","title":"Artifacts"},{"location":"walk-through/artifacts/#artifacts","text":"Note You will need to configure an artifact repository to run this example. When running workflows, it is very common to have steps that generate or consume artifacts. Often, the output artifacts of one step may be used as input artifacts to a subsequent step. The below workflow spec consists of two steps that run in sequence. The first step named generate-artifact will generate an artifact using the whalesay template that will be consumed by the second step named print-message that then consumes the generated artifact. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : artifact-passing- spec : entrypoint : artifact-example templates : - name : artifact-example steps : - - name : generate-artifact template : whalesay - - name : consume-artifact template : print-message arguments : artifacts : # bind message to the hello-art artifact # generated by the generate-artifact step - name : message from : \"{{steps.generate-artifact.outputs.artifacts.hello-art}}\" - name : whalesay container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"cowsay hello world | tee /tmp/hello_world.txt\" ] outputs : artifacts : # generate hello-art artifact from /tmp/hello_world.txt # artifacts can be directories as well as files - name : hello-art path : /tmp/hello_world.txt - name : print-message inputs : artifacts : # unpack the message input artifact # and put it at /tmp/message - name : message path : /tmp/message container : image : alpine:latest command : [ sh , -c ] args : [ \"cat /tmp/message\" ] The whalesay template uses the cowsay command to generate a file named /tmp/hello-world.txt . It then outputs this file as an artifact named hello-art . In general, the artifact's path may be a directory rather than just a file. The print-message template takes an input artifact named message , unpacks it at the path named /tmp/message and then prints the contents of /tmp/message using the cat command. The artifact-example template passes the hello-art artifact generated as an output of the generate-artifact step as the message input artifact to the print-message step. DAG templates use the tasks prefix to refer to another task, for example {{tasks.generate-artifact.outputs.artifacts.hello-art}} . Optionally, for large artifacts, you can set podSpecPatch in the workflow spec to increase the resource request for the init container and avoid any Out of memory issues. <... snipped ...> - name : large-artifact # below patch gets merged with the actual pod spec and increses the memory # request of the init container. podSpecPatch : | initContainers: - name: init resources: requests: memory: 2Gi cpu: 300m inputs : artifacts : - name : data path : /tmp/large-file container : image : alpine:latest command : [ sh , -c ] args : [ \"cat /tmp/large-file\" ] <... snipped ...> Artifacts are packaged as Tarballs and gzipped by default. You may customize this behavior by specifying an archive strategy, using the archive field. For example: <... snipped ...> outputs : artifacts : # default behavior - tar+gzip default compression. - name : hello-art-1 path : /tmp/hello_world.txt # disable archiving entirely - upload the file / directory as is. # this is useful when the container layout matches the desired target repository layout. - name : hello-art-2 path : /tmp/hello_world.txt archive : none : {} # customize the compression behavior (disabling it here). # this is useful for files with varying compression benefits, # e.g. disabling compression for a cached build workspace and large binaries, # or increasing compression for \"perfect\" textual data - like a json/xml export of a large database. - name : hello-art-3 path : /tmp/hello_world.txt archive : tar : # no compression (also accepts the standard gzip 1 to 9 values) compressionLevel : 0 <... 
snipped ...>","title":"Artifacts"},{"location":"walk-through/artifacts/#artifact-garbage-collection","text":"As of version 3.4 you can configure your Workflow to automatically delete Artifacts that you don't need (visit artifact repository capability for the current supported store engine). Artifacts can be deleted OnWorkflowCompletion or OnWorkflowDeletion . You can specify your Garbage Collection strategy on both the Workflow level and the Artifact level, so for example, you may have temporary artifacts that can be deleted right away but a final output that should be persisted: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : artifact-gc- spec : entrypoint : main artifactGC : strategy : OnWorkflowDeletion # default Strategy set here applies to all Artifacts by default templates : - name : main container : image : argoproj/argosay:v2 command : - sh - -c args : - | echo \"can throw this away\" > /tmp/temporary-artifact.txt echo \"keep this\" > /tmp/keep-this.txt outputs : artifacts : - name : temporary-artifact path : /tmp/temporary-artifact.txt s3 : key : temporary-artifact.txt - name : keep-this path : /tmp/keep-this.txt s3 : key : keep-this.txt artifactGC : strategy : Never # optional override for an Artifact","title":"Artifact Garbage Collection"},{"location":"walk-through/artifacts/#artifact-naming","text":"Consider parameterizing your S3 keys by {{workflow.uid}}, etc (as shown in the example above) if there's a possibility that you could have concurrent Workflows of the same spec. This would be to avoid a scenario in which the artifact from one Workflow is being deleted while the same S3 key is being generated for a different Workflow.","title":"Artifact Naming"},{"location":"walk-through/artifacts/#service-accounts-and-annotations","text":"Does your S3 bucket require you to run with a special Service Account or IAM Role Annotation? You can either use the same ones you use for creating artifacts or generate new ones that are specific for deletion permission. 
Generally users will probably just have a single Service Account or IAM Role to apply to all artifacts for the Workflow, but you can also customize on the artifact level if you need that: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : artifact-gc- spec : entrypoint : main artifactGC : strategy : OnWorkflowDeletion ############################################################################################## # Workflow Level Service Account and Metadata ############################################################################################## serviceAccountName : my-sa podMetadata : annotations : eks.amazonaws.com/role-arn : arn:aws:iam::111122223333:role/my-iam-role templates : - name : main container : image : argoproj/argosay:v2 command : - sh - -c args : - | echo \"can throw this away\" > /tmp/temporary-artifact.txt echo \"keep this\" > /tmp/keep-this.txt outputs : artifacts : - name : temporary-artifact path : /tmp/temporary-artifact.txt s3 : key : temporary-artifact-{{workflow.uid}}.txt artifactGC : #################################################################################### # Optional override capability #################################################################################### serviceAccountName : artifact-specific-sa podMetadata : annotations : eks.amazonaws.com/role-arn : arn:aws:iam::111122223333:role/artifact-specific-iam-role - name : keep-this path : /tmp/keep-this.txt s3 : key : keep-this-{{workflow.uid}}.txt artifactGC : strategy : Never If you do supply your own Service Account you will need to create a RoleBinding that binds it with a role like this: apiVersion : rbac.authorization.k8s.io/v1 kind : Role metadata : annotations : workflows.argoproj.io/description : | This is the minimum recommended permissions needed if you want to use artifact GC. name : artifactgc rules : - apiGroups : - argoproj.io resources : - workflowartifactgctasks verbs : - list - watch - apiGroups : - argoproj.io resources : - workflowartifactgctasks/status verbs : - patch This is the artifactgc role if you installed using one of the quick-start manifest files. If you installed with the install.yaml file for the release then the same permissions are in the argo-cluster-role . If you don't use your own ServiceAccount and are just using default ServiceAccount, then the role needs a RoleBinding or ClusterRoleBinding to default ServiceAccount.","title":"Service Accounts and Annotations"},{"location":"walk-through/artifacts/#what-happens-if-garbage-collection-fails","text":"If deletion of the artifact fails for some reason (other than the Artifact already having been deleted which is not considered a failure), the Workflow's Status will be marked with a new Condition to indicate \"Artifact GC Failure\", a Kubernetes Event will be issued, and the Argo Server UI will also indicate the failure. For additional debugging, the user should find 1 or more Pods named -artgc-* and can view the logs. 
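For illustration only (a hedged sketch, not part of the official docs: pod names, namespace, and the `my-wf` workflow name are placeholders and depend on your installation), the artifact GC pods can usually be located and inspected with standard kubectl commands:

    kubectl get pods -n argo | grep -- '-artgc-'      # list artifact GC pods (names contain "-artgc-")
    kubectl logs -n argo my-wf-artgc-<suffix>         # view one GC pod's logs to see why deletion failed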
If the user needs to delete the Workflow and its child CRD objects, they will need to patch the Workflow to remove the finalizer preventing the deletion: apiVersion : argoproj.io/v1alpha1 kind : Workflow finalizers : - workflows.argoproj.io/artifact-gc The finalizer can be deleted by doing: kubectl patch workflow my-wf \\ --type json \\ --patch = '[ { \"op\": \"remove\", \"path\": \"/metadata/finalizers\" } ]' Or for simplicity use the Argo CLI argo delete command with flag --force , which under the hood removes the finalizer before performing the deletion.","title":"What happens if Garbage Collection fails?"},{"location":"walk-through/artifacts/#release-versions-35","text":"A flag has been added to the Workflow Spec called forceFinalizerRemoval (see here ) to force the finalizer's removal even if Artifact GC fails: spec : artifactGC : strategy : OnWorkflowDeletion forceFinalizerRemoval : true","title":"Release Versions >= 3.5"},{"location":"walk-through/conditionals/","text":"Conditionals \u00b6 We also support conditional execution. The syntax is implemented by govaluate which offers the support for complex syntax. See in the example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : coinflip- spec : entrypoint : coinflip templates : - name : coinflip steps : # flip a coin - - name : flip-coin template : flip-coin # evaluate the result in parallel - - name : heads template : heads # call heads template if \"heads\" when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails template : tails # call tails template if \"tails\" when : \"{{steps.flip-coin.outputs.result}} == tails\" - - name : flip-again template : flip-coin - - name : complex-condition template : heads-tails-or-twice-tails # call heads template if first flip was \"heads\" and second was \"tails\" OR both were \"tails\" when : >- ( {{steps.flip-coin.outputs.result}} == heads && {{steps.flip-again.outputs.result}} == tails ) || ( {{steps.flip-coin.outputs.result}} == tails && {{steps.flip-again.outputs.result}} == tails ) - name : heads-regex template : heads # call heads template if ~ \"hea\" when : \"{{steps.flip-again.outputs.result}} =~ hea\" - name : tails-regex template : tails # call heads template if ~ \"tai\" when : \"{{steps.flip-again.outputs.result}} =~ tai\" # Return heads or tails based on a random number - name : flip-coin script : image : python:alpine3.6 command : [ python ] source : | import random result = \"heads\" if random.randint(0,1) == 0 else \"tails\" print(result) - name : heads container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads\\\"\" ] - name : tails container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was tails\\\"\" ] - name : heads-tails-or-twice-tails container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads the first flip and tails the second. Or it was two times tails.\\\"\" ] Nested Quotes If the parameter value contains quotes, it may invalidate the govaluate expression. To handle parameters with quotes, embed an expr expression in the conditional. For example: when : \"{{=inputs.parameters['may-contain-quotes'] == 'example'}}\"","title":"Conditionals"},{"location":"walk-through/conditionals/#conditionals","text":"We also support conditional execution. The syntax is implemented by govaluate which offers the support for complex syntax. 
See in the example: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : coinflip- spec : entrypoint : coinflip templates : - name : coinflip steps : # flip a coin - - name : flip-coin template : flip-coin # evaluate the result in parallel - - name : heads template : heads # call heads template if \"heads\" when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails template : tails # call tails template if \"tails\" when : \"{{steps.flip-coin.outputs.result}} == tails\" - - name : flip-again template : flip-coin - - name : complex-condition template : heads-tails-or-twice-tails # call heads template if first flip was \"heads\" and second was \"tails\" OR both were \"tails\" when : >- ( {{steps.flip-coin.outputs.result}} == heads && {{steps.flip-again.outputs.result}} == tails ) || ( {{steps.flip-coin.outputs.result}} == tails && {{steps.flip-again.outputs.result}} == tails ) - name : heads-regex template : heads # call heads template if ~ \"hea\" when : \"{{steps.flip-again.outputs.result}} =~ hea\" - name : tails-regex template : tails # call heads template if ~ \"tai\" when : \"{{steps.flip-again.outputs.result}} =~ tai\" # Return heads or tails based on a random number - name : flip-coin script : image : python:alpine3.6 command : [ python ] source : | import random result = \"heads\" if random.randint(0,1) == 0 else \"tails\" print(result) - name : heads container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads\\\"\" ] - name : tails container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was tails\\\"\" ] - name : heads-tails-or-twice-tails container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads the first flip and tails the second. Or it was two times tails.\\\"\" ] Nested Quotes If the parameter value contains quotes, it may invalidate the govaluate expression. To handle parameters with quotes, embed an expr expression in the conditional. For example: when : \"{{=inputs.parameters['may-contain-quotes'] == 'example'}}\"","title":"Conditionals"},{"location":"walk-through/continuous-integration-examples/","text":"Continuous Integration Examples \u00b6 Continuous integration is a popular application for workflows. Some quick examples of CI workflows: https://github.com/argoproj/argo-workflows/tree/main/examples/ci.yaml https://github.com/argoproj/argo-workflows/tree/main/examples/influxdb-ci.yaml And a CI WorkflowTemplate example: https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml A more detailed example is https://github.com/sendible-labs/argo-workflows-ci-example , which allows you to create a local CI workflow for the purposes of learning.","title":"Continuous Integration Examples"},{"location":"walk-through/continuous-integration-examples/#continuous-integration-examples","text":"Continuous integration is a popular application for workflows. 
Some quick examples of CI workflows: https://github.com/argoproj/argo-workflows/tree/main/examples/ci.yaml https://github.com/argoproj/argo-workflows/tree/main/examples/influxdb-ci.yaml And a CI WorkflowTemplate example: https://github.com/argoproj/argo-workflows/blob/main/examples/ci-workflowtemplate.yaml A more detailed example is https://github.com/sendible-labs/argo-workflows-ci-example , which allows you to create a local CI workflow for the purposes of learning.","title":"Continuous Integration Examples"},{"location":"walk-through/custom-template-variable-reference/","text":"Custom Template Variable Reference \u00b6 In this example, we can see how we can use the other template language variable reference (E.g: Jinja) in Argo workflow template. Argo will validate and resolve only the variable that starts with an Argo allowed prefix { \"item\", \"steps\", \"inputs\", \"outputs\", \"workflow\", \"tasks\" } apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : custom-template-variable- spec : entrypoint : hello-hello-hello templates : - name : hello-hello-hello steps : - - name : hello1 template : whalesay arguments : parameters : [{ name : message , value : \"hello1\" }] - - name : hello2a template : whalesay arguments : parameters : [{ name : message , value : \"hello2a\" }] - name : hello2b template : whalesay arguments : parameters : [{ name : message , value : \"hello2b\" }] - name : whalesay inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{user.username}}\" ]","title":"Custom Template Variable Reference"},{"location":"walk-through/custom-template-variable-reference/#custom-template-variable-reference","text":"In this example, we can see how we can use the other template language variable reference (E.g: Jinja) in Argo workflow template. Argo will validate and resolve only the variable that starts with an Argo allowed prefix { \"item\", \"steps\", \"inputs\", \"outputs\", \"workflow\", \"tasks\" } apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : custom-template-variable- spec : entrypoint : hello-hello-hello templates : - name : hello-hello-hello steps : - - name : hello1 template : whalesay arguments : parameters : [{ name : message , value : \"hello1\" }] - - name : hello2a template : whalesay arguments : parameters : [{ name : message , value : \"hello2a\" }] - name : hello2b template : whalesay arguments : parameters : [{ name : message , value : \"hello2b\" }] - name : whalesay inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{user.username}}\" ]","title":"Custom Template Variable Reference"},{"location":"walk-through/daemon-containers/","text":"Daemon Containers \u00b6 Argo workflows can start containers that run in the background (also known as daemon containers ) while the workflow itself continues execution. Note that the daemons will be automatically destroyed when the workflow exits the template scope in which the daemon was invoked. Daemon containers are useful for starting up services to be tested or to be used in testing (e.g., fixtures). We also find it very useful when running large simulations to spin up a database as a daemon for collecting and organizing the results. The big advantage of daemons compared with sidecars is that their existence can persist across multiple steps or even the entire workflow. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : daemon-step- spec : entrypoint : daemon-example templates : - name : daemon-example steps : - - name : influx template : influxdb # start an influxdb as a daemon (see the influxdb template spec below) - - name : init-database # initialize influxdb template : influxdb-client arguments : parameters : - name : cmd value : curl -XPOST 'http://{{steps.influx.ip}}:8086/query' --data-urlencode \"q=CREATE DATABASE mydb\" - - name : producer-1 # add entries to influxdb template : influxdb-client arguments : parameters : - name : cmd value : for i in $(seq 1 20); do curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d \"cpu,host=server01,region=uswest load=$i\" ; sleep .5 ; done - name : producer-2 # add entries to influxdb template : influxdb-client arguments : parameters : - name : cmd value : for i in $(seq 1 20); do curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d \"cpu,host=server02,region=uswest load=$((RANDOM % 100))\" ; sleep .5 ; done - name : producer-3 # add entries to influxdb template : influxdb-client arguments : parameters : - name : cmd value : curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d 'cpu,host=server03,region=useast load=15.4' - - name : consumer # consume intries from influxdb template : influxdb-client arguments : parameters : - name : cmd value : curl --silent -G http://{{steps.influx.ip}}:8086/query?pretty=true --data-urlencode \"db=mydb\" --data-urlencode \"q=SELECT * FROM cpu\" - name : influxdb daemon : true # start influxdb as a daemon retryStrategy : limit : 10 # retry container if it fails container : image : influxdb:1.2 command : - influxd readinessProbe : # wait for readinessProbe to succeed httpGet : path : /ping port : 8086 - name : influxdb-client inputs : parameters : - name : cmd container : image : appropriate/curl:latest command : [ \"/bin/sh\" , \"-c\" ] args : [ \"{{inputs.parameters.cmd}}\" ] resources : requests : memory : 32Mi cpu : 100m Step templates use the steps prefix to refer to another step: for example {{steps.influx.ip}} . In DAG templates, the tasks prefix is used instead: for example {{tasks.influx.ip}} .","title":"Daemon Containers"},{"location":"walk-through/daemon-containers/#daemon-containers","text":"Argo workflows can start containers that run in the background (also known as daemon containers ) while the workflow itself continues execution. Note that the daemons will be automatically destroyed when the workflow exits the template scope in which the daemon was invoked. Daemon containers are useful for starting up services to be tested or to be used in testing (e.g., fixtures). We also find it very useful when running large simulations to spin up a database as a daemon for collecting and organizing the results. The big advantage of daemons compared with sidecars is that their existence can persist across multiple steps or even the entire workflow. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : daemon-step- spec : entrypoint : daemon-example templates : - name : daemon-example steps : - - name : influx template : influxdb # start an influxdb as a daemon (see the influxdb template spec below) - - name : init-database # initialize influxdb template : influxdb-client arguments : parameters : - name : cmd value : curl -XPOST 'http://{{steps.influx.ip}}:8086/query' --data-urlencode \"q=CREATE DATABASE mydb\" - - name : producer-1 # add entries to influxdb template : influxdb-client arguments : parameters : - name : cmd value : for i in $(seq 1 20); do curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d \"cpu,host=server01,region=uswest load=$i\" ; sleep .5 ; done - name : producer-2 # add entries to influxdb template : influxdb-client arguments : parameters : - name : cmd value : for i in $(seq 1 20); do curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d \"cpu,host=server02,region=uswest load=$((RANDOM % 100))\" ; sleep .5 ; done - name : producer-3 # add entries to influxdb template : influxdb-client arguments : parameters : - name : cmd value : curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d 'cpu,host=server03,region=useast load=15.4' - - name : consumer # consume intries from influxdb template : influxdb-client arguments : parameters : - name : cmd value : curl --silent -G http://{{steps.influx.ip}}:8086/query?pretty=true --data-urlencode \"db=mydb\" --data-urlencode \"q=SELECT * FROM cpu\" - name : influxdb daemon : true # start influxdb as a daemon retryStrategy : limit : 10 # retry container if it fails container : image : influxdb:1.2 command : - influxd readinessProbe : # wait for readinessProbe to succeed httpGet : path : /ping port : 8086 - name : influxdb-client inputs : parameters : - name : cmd container : image : appropriate/curl:latest command : [ \"/bin/sh\" , \"-c\" ] args : [ \"{{inputs.parameters.cmd}}\" ] resources : requests : memory : 32Mi cpu : 100m Step templates use the steps prefix to refer to another step: for example {{steps.influx.ip}} . In DAG templates, the tasks prefix is used instead: for example {{tasks.influx.ip}} .","title":"Daemon Containers"},{"location":"walk-through/dag/","text":"DAG \u00b6 As an alternative to specifying sequences of steps , you can define a workflow as a directed-acyclic graph (DAG) by specifying the dependencies of each task. DAGs can be simpler to maintain for complex workflows and allow for maximum parallelism when running tasks. In the following workflow, step A runs first, as it has no dependencies. Once A has finished, steps B and C run in parallel. Finally, once B and C have completed, step D runs. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : dag-diamond- spec : entrypoint : diamond templates : - name : echo inputs : parameters : - name : message container : image : alpine:3.7 command : [ echo , \"{{inputs.parameters.message}}\" ] - name : diamond dag : tasks : - name : A template : echo arguments : parameters : [{ name : message , value : A }] - name : B dependencies : [ A ] template : echo arguments : parameters : [{ name : message , value : B }] - name : C dependencies : [ A ] template : echo arguments : parameters : [{ name : message , value : C }] - name : D dependencies : [ B , C ] template : echo arguments : parameters : [{ name : message , value : D }] The dependency graph may have multiple roots . 
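As an illustration of multiple roots (a minimal sketch with hypothetical task names, reusing the echo template from the example above), any task declared without dependencies is a root and starts immediately:

    - name: two-roots
      dag:
        tasks:
          - name: root-a            # no dependencies: starts right away
            template: echo
            arguments:
              parameters: [{name: message, value: A}]
          - name: root-b            # a second root, runs in parallel with root-a
            template: echo
            arguments:
              parameters: [{name: message, value: B}]
          - name: join              # waits for both roots to finish
            dependencies: [root-a, root-b]
            template: echo
            arguments:
              parameters: [{name: message, value: done}]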
The templates called from a DAG or steps template can themselves be DAG or steps templates, allowing complex workflows to be split into manageable pieces. Enhanced Depends \u00b6 For more complicated, conditional dependencies, you can use the Enhanced Depends feature. Fail Fast \u00b6 By default, DAGs fail fast: when one task fails, no new tasks will be scheduled. Once all running tasks are completed, the DAG will be marked as failed. If failFast is set to false for a DAG, all branches will run to completion, regardless of failures in other branches.","title":"DAG"},{"location":"walk-through/dag/#dag","text":"As an alternative to specifying sequences of steps , you can define a workflow as a directed-acyclic graph (DAG) by specifying the dependencies of each task. DAGs can be simpler to maintain for complex workflows and allow for maximum parallelism when running tasks. In the following workflow, step A runs first, as it has no dependencies. Once A has finished, steps B and C run in parallel. Finally, once B and C have completed, step D runs. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : dag-diamond- spec : entrypoint : diamond templates : - name : echo inputs : parameters : - name : message container : image : alpine:3.7 command : [ echo , \"{{inputs.parameters.message}}\" ] - name : diamond dag : tasks : - name : A template : echo arguments : parameters : [{ name : message , value : A }] - name : B dependencies : [ A ] template : echo arguments : parameters : [{ name : message , value : B }] - name : C dependencies : [ A ] template : echo arguments : parameters : [{ name : message , value : C }] - name : D dependencies : [ B , C ] template : echo arguments : parameters : [{ name : message , value : D }] The dependency graph may have multiple roots . The templates called from a DAG or steps template can themselves be DAG or steps templates, allowing complex workflows to be split into manageable pieces.","title":"DAG"},{"location":"walk-through/dag/#enhanced-depends","text":"For more complicated, conditional dependencies, you can use the Enhanced Depends feature.","title":"Enhanced Depends"},{"location":"walk-through/dag/#fail-fast","text":"By default, DAGs fail fast: when one task fails, no new tasks will be scheduled. Once all running tasks are completed, the DAG will be marked as failed. If failFast is set to false for a DAG, all branches will run to completion, regardless of failures in other branches.","title":"Fail Fast"},{"location":"walk-through/docker-in-docker-using-sidecars/","text":"Docker-in-Docker Using Sidecars \u00b6 Note: It is increasingly unlikely that the below example will work for you on your version of Kubernetes. Since Kubernetes 1.24, the dockershim has been unavailable as part of Kubernetes , rendering Docker-in-Docker unworkable. It is recommended to seek alternative methods of building containers, such as Kaniko or Buildkit . A Buildkit Workflow example is available in the examples directory of the Argo Workflows repository. An application of sidecars is to implement Docker-in-Docker (DIND). DIND is useful when you want to run Docker commands from inside a container. For example, you may want to build and push a container image from inside your build container. In the following example, we use the docker:dind image to run a Docker daemon in a sidecar and give the main container access to the daemon. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : sidecar-dind- spec : entrypoint : dind-sidecar-example templates : - name : dind-sidecar-example container : image : docker:19.03.13 command : [ sh , -c ] args : [ \"until docker ps; do sleep 3; done; docker run --rm debian:latest cat /etc/os-release\" ] env : - name : DOCKER_HOST # the docker daemon can be access on the standard port on localhost value : 127.0.0.1 sidecars : - name : dind image : docker:19.03.13-dind # Docker already provides an image for running a Docker daemon command : [ dockerd-entrypoint.sh ] env : - name : DOCKER_TLS_CERTDIR # Docker TLS env config value : \"\" securityContext : privileged : true # the Docker daemon can only run in a privileged container # mirrorVolumeMounts will mount the same volumes specified in the main container # to the sidecar (including artifacts), at the same mountPaths. This enables # dind daemon to (partially) see the same filesystem as the main container in # order to use features such as docker volume binding. mirrorVolumeMounts : true","title":"Docker-in-Docker Using Sidecars"},{"location":"walk-through/docker-in-docker-using-sidecars/#docker-in-docker-using-sidecars","text":"Note: It is increasingly unlikely that the below example will work for you on your version of Kubernetes. Since Kubernetes 1.24, the dockershim has been unavailable as part of Kubernetes , rendering Docker-in-Docker unworkable. It is recommended to seek alternative methods of building containers, such as Kaniko or Buildkit . A Buildkit Workflow example is available in the examples directory of the Argo Workflows repository. An application of sidecars is to implement Docker-in-Docker (DIND). DIND is useful when you want to run Docker commands from inside a container. For example, you may want to build and push a container image from inside your build container. In the following example, we use the docker:dind image to run a Docker daemon in a sidecar and give the main container access to the daemon. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : sidecar-dind- spec : entrypoint : dind-sidecar-example templates : - name : dind-sidecar-example container : image : docker:19.03.13 command : [ sh , -c ] args : [ \"until docker ps; do sleep 3; done; docker run --rm debian:latest cat /etc/os-release\" ] env : - name : DOCKER_HOST # the docker daemon can be access on the standard port on localhost value : 127.0.0.1 sidecars : - name : dind image : docker:19.03.13-dind # Docker already provides an image for running a Docker daemon command : [ dockerd-entrypoint.sh ] env : - name : DOCKER_TLS_CERTDIR # Docker TLS env config value : \"\" securityContext : privileged : true # the Docker daemon can only run in a privileged container # mirrorVolumeMounts will mount the same volumes specified in the main container # to the sidecar (including artifacts), at the same mountPaths. This enables # dind daemon to (partially) see the same filesystem as the main container in # order to use features such as docker volume binding. mirrorVolumeMounts : true","title":"Docker-in-Docker Using Sidecars"},{"location":"walk-through/exit-handlers/","text":"Exit handlers \u00b6 An exit handler is a template that always executes, irrespective of success or failure, at the end of the workflow. Some common use cases of exit handlers are: cleaning up after a workflow runs sending notifications of workflow status (e.g., e-mail/Slack) posting the pass/fail status to a web-hook result (e.g. 
GitHub build result) resubmitting or submitting another workflow apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : exit-handlers- spec : entrypoint : intentional-fail onExit : exit-handler # invoke exit-handler template at end of the workflow templates : # primary workflow template - name : intentional-fail container : image : alpine:latest command : [ sh , -c ] args : [ \"echo intentional failure; exit 1\" ] # Exit handler templates # After the completion of the entrypoint template, the status of the # workflow is made available in the global variable {{workflow.status}}. # {{workflow.status}} will be one of: Succeeded, Failed, Error - name : exit-handler steps : - - name : notify template : send-email - name : celebrate template : celebrate when : \"{{workflow.status}} == Succeeded\" - name : cry template : cry when : \"{{workflow.status}} != Succeeded\" - name : send-email container : image : alpine:latest command : [ sh , -c ] args : [ \"echo send e-mail: {{workflow.name}} {{workflow.status}} {{workflow.duration}}\" ] - name : celebrate container : image : alpine:latest command : [ sh , -c ] args : [ \"echo hooray!\" ] - name : cry container : image : alpine:latest command : [ sh , -c ] args : [ \"echo boohoo!\" ]","title":"Exit handlers"},{"location":"walk-through/exit-handlers/#exit-handlers","text":"An exit handler is a template that always executes, irrespective of success or failure, at the end of the workflow. Some common use cases of exit handlers are: cleaning up after a workflow runs sending notifications of workflow status (e.g., e-mail/Slack) posting the pass/fail status to a web-hook result (e.g. GitHub build result) resubmitting or submitting another workflow apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : exit-handlers- spec : entrypoint : intentional-fail onExit : exit-handler # invoke exit-handler template at end of the workflow templates : # primary workflow template - name : intentional-fail container : image : alpine:latest command : [ sh , -c ] args : [ \"echo intentional failure; exit 1\" ] # Exit handler templates # After the completion of the entrypoint template, the status of the # workflow is made available in the global variable {{workflow.status}}. # {{workflow.status}} will be one of: Succeeded, Failed, Error - name : exit-handler steps : - - name : notify template : send-email - name : celebrate template : celebrate when : \"{{workflow.status}} == Succeeded\" - name : cry template : cry when : \"{{workflow.status}} != Succeeded\" - name : send-email container : image : alpine:latest command : [ sh , -c ] args : [ \"echo send e-mail: {{workflow.name}} {{workflow.status}} {{workflow.duration}}\" ] - name : celebrate container : image : alpine:latest command : [ sh , -c ] args : [ \"echo hooray!\" ] - name : cry container : image : alpine:latest command : [ sh , -c ] args : [ \"echo boohoo!\" ]","title":"Exit handlers"},{"location":"walk-through/hardwired-artifacts/","text":"Hardwired Artifacts \u00b6 You can use any container image to generate any kind of artifact. In practice, however, certain types of artifacts are very common, so there is built-in support for git, HTTP, GCS, and S3 artifacts. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hardwired-artifact- spec : entrypoint : hardwired-artifact templates : - name : hardwired-artifact inputs : artifacts : # Check out the main branch of the argo repo and place it at /src # revision can be anything that git checkout accepts: branch, commit, tag, etc. - name : argo-source path : /src git : repo : https://github.com/argoproj/argo-workflows.git revision : \"main\" # Download kubectl 1.8.0 and place it at /bin/kubectl - name : kubectl path : /bin/kubectl mode : 0755 http : url : https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl # Copy an s3 compatible artifact repository bucket (such as AWS, GCS and MinIO) and place it at /s3 - name : objects path : /s3 s3 : endpoint : storage.googleapis.com bucket : my-bucket-name key : path/in/bucket accessKeySecret : name : my-s3-credentials key : accessKey secretKeySecret : name : my-s3-credentials key : secretKey container : image : debian command : [ sh , -c ] args : [ \"ls -l /src /bin/kubectl /s3\" ]","title":"Hardwired Artifacts"},{"location":"walk-through/hardwired-artifacts/#hardwired-artifacts","text":"You can use any container image to generate any kind of artifact. In practice, however, certain types of artifacts are very common, so there is built-in support for git, HTTP, GCS, and S3 artifacts. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hardwired-artifact- spec : entrypoint : hardwired-artifact templates : - name : hardwired-artifact inputs : artifacts : # Check out the main branch of the argo repo and place it at /src # revision can be anything that git checkout accepts: branch, commit, tag, etc. - name : argo-source path : /src git : repo : https://github.com/argoproj/argo-workflows.git revision : \"main\" # Download kubectl 1.8.0 and place it at /bin/kubectl - name : kubectl path : /bin/kubectl mode : 0755 http : url : https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl # Copy an s3 compatible artifact repository bucket (such as AWS, GCS and MinIO) and place it at /s3 - name : objects path : /s3 s3 : endpoint : storage.googleapis.com bucket : my-bucket-name key : path/in/bucket accessKeySecret : name : my-s3-credentials key : accessKey secretKeySecret : name : my-s3-credentials key : secretKey container : image : debian command : [ sh , -c ] args : [ \"ls -l /src /bin/kubectl /s3\" ]","title":"Hardwired Artifacts"},{"location":"walk-through/hello-world/","text":"Hello World \u00b6 Let's start by creating a very simple workflow template to echo \"hello world\" using the docker/whalesay container image from Docker Hub. You can run this directly from your shell with a simple docker command: $ docker run docker/whalesay cowsay \"hello world\" _____________ < hello world > ------------- \\ \\ \\ ## . ## ## ## == ## ## ## ## === / \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\" ___/ === ~~~ { ~~ ~~~~ ~~~ ~~~~ ~~ ~ / === - ~~~ \\_ _____ o __/ \\ \\ __/ \\_ ___ \\_ _____/ Hello from Docker! This message shows that your installation appears to be working correctly. Below, we run the same container on a Kubernetes cluster using an Argo workflow template. Be sure to read the comments as they provide useful explanations. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow # new type of k8s spec metadata : generateName : hello-world- # name of the workflow spec spec : entrypoint : whalesay # invoke the whalesay template templates : - name : whalesay # name of the template container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ] resources : # limit the resources limits : memory : 32Mi cpu : 100m Argo adds a new kind of Kubernetes spec called a Workflow . The above spec contains a single template called whalesay which runs the docker/whalesay container and invokes cowsay \"hello world\" . The whalesay template is the entrypoint for the spec. The entrypoint specifies the initial template that should be invoked when the workflow spec is executed by Kubernetes. Being able to specify the entrypoint is more useful when there is more than one template defined in the Kubernetes workflow spec. :-)","title":"Hello World"},{"location":"walk-through/hello-world/#hello-world","text":"Let's start by creating a very simple workflow template to echo \"hello world\" using the docker/whalesay container image from Docker Hub. You can run this directly from your shell with a simple docker command: $ docker run docker/whalesay cowsay \"hello world\" _____________ < hello world > ------------- \\ \\ \\ ## . ## ## ## == ## ## ## ## === / \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\" ___/ === ~~~ { ~~ ~~~~ ~~~ ~~~~ ~~ ~ / === - ~~~ \\_ _____ o __/ \\ \\ __/ \\_ ___ \\_ _____/ Hello from Docker! This message shows that your installation appears to be working correctly. Below, we run the same container on a Kubernetes cluster using an Argo workflow template. Be sure to read the comments as they provide useful explanations. apiVersion : argoproj.io/v1alpha1 kind : Workflow # new type of k8s spec metadata : generateName : hello-world- # name of the workflow spec spec : entrypoint : whalesay # invoke the whalesay template templates : - name : whalesay # name of the template container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ] resources : # limit the resources limits : memory : 32Mi cpu : 100m Argo adds a new kind of Kubernetes spec called a Workflow . The above spec contains a single template called whalesay which runs the docker/whalesay container and invokes cowsay \"hello world\" . The whalesay template is the entrypoint for the spec. The entrypoint specifies the initial template that should be invoked when the workflow spec is executed by Kubernetes. Being able to specify the entrypoint is more useful when there is more than one template defined in the Kubernetes workflow spec. :-)","title":"Hello World"},{"location":"walk-through/kubernetes-resources/","text":"Kubernetes Resources \u00b6 In many cases, you will want to manage Kubernetes resources from Argo workflows. The resource template allows you to create, delete or updated any type of Kubernetes resource. # in a workflow. The resource template type accepts any k8s manifest # (including CRDs) and can perform any `kubectl` action against it (e.g. create, # apply, delete, patch). apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : k8s-jobs- spec : entrypoint : pi-tmpl templates : - name : pi-tmpl resource : # indicates that this is a resource template action : create # can be any kubectl action (e.g. create, delete, apply, patch) # The successCondition and failureCondition are optional expressions. # If failureCondition is true, the step is considered failed. 
# If successCondition is true, the step is considered successful. # They use kubernetes label selection syntax and can be applied against any field # of the resource (not just labels). Multiple AND conditions can be represented by comma # delimited expressions. # For more details: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ successCondition : status.succeeded > 0 failureCondition : status.failed > 3 manifest : | #put your kubernetes spec here apiVersion: batch/v1 kind: Job metadata: generateName: pi-job- spec: template: metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: Never backoffLimit: 4 Note: Currently only a single resource can be managed by a resource template so either a generateName or name must be provided in the resource's meta-data. Resources created in this way are independent of the workflow. If you want the resource to be deleted when the workflow is deleted then you can use Kubernetes garbage collection with the workflow resource as an owner reference ( example ). You can also collect data about the resource in output parameters (see more at k8s-jobs.yaml ) Note: When patching, the resource will accept another attribute, mergeStrategy , which can either be strategic , merge , or json . If this attribute is not supplied, it will default to strategic . Keep in mind that Custom Resources cannot be patched with strategic , so a different strategy must be chosen. For example, suppose you have the CronTab CRD defined, and the following instance of a CronTab : apiVersion : \"stable.example.com/v1\" kind : CronTab spec : cronSpec : \"* * * * */5\" image : my-awesome-cron-image This CronTab can be modified using the following Argo Workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : k8s-patch- spec : entrypoint : cront-tmpl templates : - name : cront-tmpl resource : action : patch mergeStrategy : merge # Must be one of [strategic merge json] manifest : | apiVersion: \"stable.example.com/v1\" kind: CronTab spec: cronSpec: \"* * * * */10\" image: my-awesome-cron-image","title":"Kubernetes Resources"},{"location":"walk-through/kubernetes-resources/#kubernetes-resources","text":"In many cases, you will want to manage Kubernetes resources from Argo workflows. The resource template allows you to create, delete or updated any type of Kubernetes resource. # in a workflow. The resource template type accepts any k8s manifest # (including CRDs) and can perform any `kubectl` action against it (e.g. create, # apply, delete, patch). apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : k8s-jobs- spec : entrypoint : pi-tmpl templates : - name : pi-tmpl resource : # indicates that this is a resource template action : create # can be any kubectl action (e.g. create, delete, apply, patch) # The successCondition and failureCondition are optional expressions. # If failureCondition is true, the step is considered failed. # If successCondition is true, the step is considered successful. # They use kubernetes label selection syntax and can be applied against any field # of the resource (not just labels). Multiple AND conditions can be represented by comma # delimited expressions. 
# For more details: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ successCondition : status.succeeded > 0 failureCondition : status.failed > 3 manifest : | #put your kubernetes spec here apiVersion: batch/v1 kind: Job metadata: generateName: pi-job- spec: template: metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: Never backoffLimit: 4 Note: Currently only a single resource can be managed by a resource template so either a generateName or name must be provided in the resource's meta-data. Resources created in this way are independent of the workflow. If you want the resource to be deleted when the workflow is deleted then you can use Kubernetes garbage collection with the workflow resource as an owner reference ( example ). You can also collect data about the resource in output parameters (see more at k8s-jobs.yaml ) Note: When patching, the resource will accept another attribute, mergeStrategy , which can either be strategic , merge , or json . If this attribute is not supplied, it will default to strategic . Keep in mind that Custom Resources cannot be patched with strategic , so a different strategy must be chosen. For example, suppose you have the CronTab CRD defined, and the following instance of a CronTab : apiVersion : \"stable.example.com/v1\" kind : CronTab spec : cronSpec : \"* * * * */5\" image : my-awesome-cron-image This CronTab can be modified using the following Argo Workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : k8s-patch- spec : entrypoint : cront-tmpl templates : - name : cront-tmpl resource : action : patch mergeStrategy : merge # Must be one of [strategic merge json] manifest : | apiVersion: \"stable.example.com/v1\" kind: CronTab spec: cronSpec: \"* * * * */10\" image: my-awesome-cron-image","title":"Kubernetes Resources"},{"location":"walk-through/loops/","text":"Loops \u00b6 When writing workflows, it is often very useful to be able to iterate over a set of inputs, as this is how argo-workflows can perform loops. There are two basic ways of running a template multiple times. withItems takes a list of things to work on. Either plain, single values, which are then usable in your template as '{{item}}' a JSON object where each element in the object can be addressed by it's key as '{{item.key}}' withParam takes a JSON array of items, and iterates over it - again the items can be objects like with withItems . This is very powerful, as you can generate the JSON in another step in your workflow, so creating a dynamic workflow. withItems basic example \u00b6 This example is the simplest. We are taking a basic list of items and iterating over it with withItems . It is limited to one varying field for each of the workflow templates instantiated. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : loops- spec : entrypoint : loop-example templates : - name : loop-example steps : - - name : print-message template : whalesay arguments : parameters : - name : message value : \"{{item}}\" withItems : # invoke whalesay once for each item in parallel - hello world # item 1 - goodbye world # item 2 - name : whalesay inputs : parameters : - name : message container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] withItems more complex example \u00b6 If we'd like to pass more than one piece of information in each workflow, you can instead use a JSON object for each entry in withItems and then address the elements by key, as shown in this example. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : loops-maps- spec : entrypoint : loop-map-example templates : - name : loop-map-example # parameter specifies the list to iterate over steps : - - name : test-linux template : cat-os-release arguments : parameters : - name : image value : \"{{item.image}}\" - name : tag value : \"{{item.tag}}\" withItems : - { image : 'debian' , tag : '9.1' } #item set 1 - { image : 'debian' , tag : '8.9' } #item set 2 - { image : 'alpine' , tag : '3.6' } #item set 3 - { image : 'ubuntu' , tag : '17.10' } #item set 4 - name : cat-os-release inputs : parameters : - name : image - name : tag container : image : \"{{inputs.parameters.image}}:{{inputs.parameters.tag}}\" command : [ cat ] args : [ /etc/os-release ] withParam example \u00b6 This example does exactly the same job as the previous example, but using withParam to pass the information as a JSON array argument, instead of hard-coding it into the template. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : loops-param-arg- spec : entrypoint : loop-param-arg-example arguments : parameters : - name : os-list # a list of items value : | [ { \"image\": \"debian\", \"tag\": \"9.1\" }, { \"image\": \"debian\", \"tag\": \"8.9\" }, { \"image\": \"alpine\", \"tag\": \"3.6\" }, { \"image\": \"ubuntu\", \"tag\": \"17.10\" } ] templates : - name : loop-param-arg-example inputs : parameters : - name : os-list steps : - - name : test-linux template : cat-os-release arguments : parameters : - name : image value : \"{{item.image}}\" - name : tag value : \"{{item.tag}}\" withParam : \"{{inputs.parameters.os-list}}\" # parameter specifies the list to iterate over # This template is the same as in the previous example - name : cat-os-release inputs : parameters : - name : image - name : tag container : image : \"{{inputs.parameters.image}}:{{inputs.parameters.tag}}\" command : [ cat ] args : [ /etc/os-release ] withParam example from another step in the workflow \u00b6 Finally, the most powerful form of this is to generate that JSON array of objects dynamically in one step, and then pass it to the next step so that the number and values used in the second step are only calculated at runtime. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : loops-param-result- spec : entrypoint : loop-param-result-example templates : - name : loop-param-result-example steps : - - name : generate template : gen-number-list # Iterate over the list of numbers generated by the generate step above - - name : sleep template : sleep-n-sec arguments : parameters : - name : seconds value : \"{{item}}\" withParam : \"{{steps.generate.outputs.result}}\" # Generate a list of numbers in JSON format - name : gen-number-list script : image : python:alpine3.6 command : [ python ] source : | import json import sys json.dump([i for i in range(20, 31)], sys.stdout) - name : sleep-n-sec inputs : parameters : - name : seconds container : image : alpine:latest command : [ sh , -c ] args : [ \"echo sleeping for {{inputs.parameters.seconds}} seconds; sleep {{inputs.parameters.seconds}}; echo done\" ] Accessing the aggregate results of a loop \u00b6 The output of all iterations can be accessed as a JSON array, once the loop is done. The example below shows how you can read it. Please note: the output of each iteration must be a valid JSON . apiVersion : argoproj.io/v1alpha1 kind : WorkflowTemplate metadata : name : loop-test spec : entrypoint : main templates : - name : main steps : - - name : execute-parallel-steps template : print-json-entry arguments : parameters : - name : index value : '{{item}}' withParam : '[1, 2, 3]' - - name : call-access-aggregate-output template : access-aggregate-output arguments : parameters : - name : aggregate-results # If the value of each loop iteration isn't a valid JSON, # you get a JSON parse error: value : '{{steps.execute-parallel-steps.outputs.result}}' - name : print-json-entry inputs : parameters : - name : index # The output must be a valid JSON script : image : alpine:latest command : [ sh ] source : | cat < /tmp/hello_world.txt\" ] # generate the content of hello_world.txt outputs : parameters : - name : hello-param # name of output parameter valueFrom : path : /tmp/hello_world.txt # set the value of hello-param to the contents of this hello-world.txt - name : print-message inputs : parameters : - name : message container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] DAG templates use the tasks prefix to refer to another task, for example {{tasks.generate-parameter.outputs.parameters.hello-param}} . result output parameter \u00b6 The result output parameter captures standard output. It is accessible from the outputs map: outputs.result . Only 256 kb of the standard output stream will be captured. Scripts \u00b6 Outputs of a script are assigned to standard output and captured in the result parameter. More details here . Containers \u00b6 Container steps and tasks also have their standard output captured in the result parameter. Given a task , called log-int , result would then be accessible as {{ tasks.log-int.outputs.result }} . If using steps , substitute tasks for steps : {{ steps.log-int.outputs.result }} .","title":"Output Parameters"},{"location":"walk-through/output-parameters/#output-parameters","text":"Output parameters provide a general mechanism to use the result of a step as a parameter (and not just as an artifact). This allows you to use the result from any type of step, not just a script , for conditional tests, loops, and arguments. 
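As a minimal sketch (hypothetical template and step names), the standard output captured from a plain container step can be consumed directly, for example in a when condition:

    - name: result-example
      steps:
        - - name: log-int
            template: print-number
        - - name: on-three
            template: report
            when: "{{steps.log-int.outputs.result}} == 3"
    - name: print-number
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["echo -n 3"]         # stdout becomes this step's result
    - name: report
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["echo the result was three"]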
Output parameters work similarly to script result except that the value of the output parameter is set to the contents of a generated file rather than the contents of stdout . apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : output-parameter- spec : entrypoint : output-parameter templates : - name : output-parameter steps : - - name : generate-parameter template : whalesay - - name : consume-parameter template : print-message arguments : parameters : # Pass the hello-param output from the generate-parameter step as the message input to print-message - name : message value : \"{{steps.generate-parameter.outputs.parameters.hello-param}}\" - name : whalesay container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"echo -n hello world > /tmp/hello_world.txt\" ] # generate the content of hello_world.txt outputs : parameters : - name : hello-param # name of output parameter valueFrom : path : /tmp/hello_world.txt # set the value of hello-param to the contents of this hello-world.txt - name : print-message inputs : parameters : - name : message container : image : docker/whalesay:latest command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] DAG templates use the tasks prefix to refer to another task, for example {{tasks.generate-parameter.outputs.parameters.hello-param}} .","title":"Output Parameters"},{"location":"walk-through/output-parameters/#result-output-parameter","text":"The result output parameter captures standard output. It is accessible from the outputs map: outputs.result . Only 256 kb of the standard output stream will be captured.","title":"result output parameter"},{"location":"walk-through/output-parameters/#scripts","text":"Outputs of a script are assigned to standard output and captured in the result parameter. More details here .","title":"Scripts"},{"location":"walk-through/output-parameters/#containers","text":"Container steps and tasks also have their standard output captured in the result parameter. Given a task , called log-int , result would then be accessible as {{ tasks.log-int.outputs.result }} . If using steps , substitute tasks for steps : {{ steps.log-int.outputs.result }} .","title":"Containers"},{"location":"walk-through/parameters/","text":"Parameters \u00b6 Let's look at a slightly more complex workflow spec with parameters. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-parameters- spec : # invoke the whalesay template with # \"hello world\" as the argument # to the message parameter entrypoint : whalesay arguments : parameters : - name : message value : hello world templates : - name : whalesay inputs : parameters : - name : message # parameter declaration container : # run cowsay with that message input parameter as args image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] This time, the whalesay template takes an input parameter named message that is passed as the args to the cowsay command. In order to reference parameters (e.g., \"{{inputs.parameters.message}}\" ), the parameters must be enclosed in double quotes to escape the curly braces in YAML. The argo CLI provides a convenient way to override parameters used to invoke the entrypoint. For example, the following command would bind the message parameter to \"goodbye world\" instead of the default \"hello world\". 
argo submit arguments-parameters.yaml -p message = \"goodbye world\" In case of multiple parameters that can be overridden, the argo CLI provides a command to load parameters files in YAML or JSON format. Here is an example of that kind of parameter file: message : goodbye world To run use following command: argo submit arguments-parameters.yaml --parameter-file params.yaml Command-line parameters can also be used to override the default entrypoint and invoke any template in the workflow spec. For example, if you add a new version of the whalesay template called whalesay-caps but you don't want to change the default entrypoint, you can invoke this from the command line as follows: argo submit arguments-parameters.yaml --entrypoint whalesay-caps By using a combination of the --entrypoint and -p parameters, you can call any template in the workflow spec with any parameter that you like. The values set in the spec.arguments.parameters are globally scoped and can be accessed via {{workflow.parameters.parameter_name}} . This can be useful to pass information to multiple steps in a workflow. For example, if you wanted to run your workflows with different logging levels that are set in the environment of each container, you could have a YAML file similar to this one: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : global-parameters- spec : entrypoint : A arguments : parameters : - name : log-level value : INFO templates : - name : A container : image : containerA env : - name : LOG_LEVEL value : \"{{workflow.parameters.log-level}}\" command : [ runA ] - name : B container : image : containerB env : - name : LOG_LEVEL value : \"{{workflow.parameters.log-level}}\" command : [ runB ] In this workflow, both steps A and B would have the same log-level set to INFO and can easily be changed between workflow submissions using the -p flag.","title":"Parameters"},{"location":"walk-through/parameters/#parameters","text":"Let's look at a slightly more complex workflow spec with parameters. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : hello-world-parameters- spec : # invoke the whalesay template with # \"hello world\" as the argument # to the message parameter entrypoint : whalesay arguments : parameters : - name : message value : hello world templates : - name : whalesay inputs : parameters : - name : message # parameter declaration container : # run cowsay with that message input parameter as args image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] This time, the whalesay template takes an input parameter named message that is passed as the args to the cowsay command. In order to reference parameters (e.g., \"{{inputs.parameters.message}}\" ), the parameters must be enclosed in double quotes to escape the curly braces in YAML. The argo CLI provides a convenient way to override parameters used to invoke the entrypoint. For example, the following command would bind the message parameter to \"goodbye world\" instead of the default \"hello world\". argo submit arguments-parameters.yaml -p message = \"goodbye world\" In case of multiple parameters that can be overridden, the argo CLI provides a command to load parameters files in YAML or JSON format. 
Here is an example of that kind of parameter file: message : goodbye world To run use following command: argo submit arguments-parameters.yaml --parameter-file params.yaml Command-line parameters can also be used to override the default entrypoint and invoke any template in the workflow spec. For example, if you add a new version of the whalesay template called whalesay-caps but you don't want to change the default entrypoint, you can invoke this from the command line as follows: argo submit arguments-parameters.yaml --entrypoint whalesay-caps By using a combination of the --entrypoint and -p parameters, you can call any template in the workflow spec with any parameter that you like. The values set in the spec.arguments.parameters are globally scoped and can be accessed via {{workflow.parameters.parameter_name}} . This can be useful to pass information to multiple steps in a workflow. For example, if you wanted to run your workflows with different logging levels that are set in the environment of each container, you could have a YAML file similar to this one: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : global-parameters- spec : entrypoint : A arguments : parameters : - name : log-level value : INFO templates : - name : A container : image : containerA env : - name : LOG_LEVEL value : \"{{workflow.parameters.log-level}}\" command : [ runA ] - name : B container : image : containerB env : - name : LOG_LEVEL value : \"{{workflow.parameters.log-level}}\" command : [ runB ] In this workflow, both steps A and B would have the same log-level set to INFO and can easily be changed between workflow submissions using the -p flag.","title":"Parameters"},{"location":"walk-through/recursion/","text":"Recursion \u00b6 Templates can recursively invoke each other! In this variation of the above coin-flip template, we continue to flip coins until it comes up heads. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : coinflip-recursive- spec : entrypoint : coinflip templates : - name : coinflip steps : # flip a coin - - name : flip-coin template : flip-coin # evaluate the result in parallel - - name : heads template : heads # call heads template if \"heads\" when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails # keep flipping coins if \"tails\" template : coinflip when : \"{{steps.flip-coin.outputs.result}} == tails\" - name : flip-coin script : image : python:alpine3.6 command : [ python ] source : | import random result = \"heads\" if random.randint(0,1) == 0 else \"tails\" print(result) - name : heads container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads\\\"\" ] Here's the result of a couple of runs of coin-flip for comparison. 
argo get coinflip-recursive-tzcb5 STEP PODNAME MESSAGE \u2714 coinflip-recursive-vhph5 \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-vhph5-2123890397 \u2514\u2500\u252c\u2500\u2714 heads coinflip-recursive-vhph5-128690560 \u2514\u2500\u25cb tails STEP PODNAME MESSAGE \u2714 coinflip-recursive-tzcb5 \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-322836820 \u2514\u2500\u252c\u2500\u25cb heads \u2514\u2500\u2714 tails \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-1863890320 \u2514\u2500\u252c\u2500\u25cb heads \u2514\u2500\u2714 tails \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-1768147140 \u2514\u2500\u252c\u2500\u25cb heads \u2514\u2500\u2714 tails \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-4080411136 \u2514\u2500\u252c\u2500\u2714 heads coinflip-recursive-tzcb5-4080323273 \u2514\u2500\u25cb tails In the first run, the coin immediately comes up heads and we stop. In the second run, the coin comes up tail three times before it finally comes up heads and we stop.","title":"Recursion"},{"location":"walk-through/recursion/#recursion","text":"Templates can recursively invoke each other! In this variation of the above coin-flip template, we continue to flip coins until it comes up heads. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : coinflip-recursive- spec : entrypoint : coinflip templates : - name : coinflip steps : # flip a coin - - name : flip-coin template : flip-coin # evaluate the result in parallel - - name : heads template : heads # call heads template if \"heads\" when : \"{{steps.flip-coin.outputs.result}} == heads\" - name : tails # keep flipping coins if \"tails\" template : coinflip when : \"{{steps.flip-coin.outputs.result}} == tails\" - name : flip-coin script : image : python:alpine3.6 command : [ python ] source : | import random result = \"heads\" if random.randint(0,1) == 0 else \"tails\" print(result) - name : heads container : image : alpine:3.6 command : [ sh , -c ] args : [ \"echo \\\"it was heads\\\"\" ] Here's the result of a couple of runs of coin-flip for comparison. argo get coinflip-recursive-tzcb5 STEP PODNAME MESSAGE \u2714 coinflip-recursive-vhph5 \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-vhph5-2123890397 \u2514\u2500\u252c\u2500\u2714 heads coinflip-recursive-vhph5-128690560 \u2514\u2500\u25cb tails STEP PODNAME MESSAGE \u2714 coinflip-recursive-tzcb5 \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-322836820 \u2514\u2500\u252c\u2500\u25cb heads \u2514\u2500\u2714 tails \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-1863890320 \u2514\u2500\u252c\u2500\u25cb heads \u2514\u2500\u2714 tails \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-1768147140 \u2514\u2500\u252c\u2500\u25cb heads \u2514\u2500\u2714 tails \u251c\u2500\u2500\u2500\u2714 flip-coin coinflip-recursive-tzcb5-4080411136 \u2514\u2500\u252c\u2500\u2714 heads coinflip-recursive-tzcb5-4080323273 \u2514\u2500\u25cb tails In the first run, the coin immediately comes up heads and we stop. 
In the second run, the coin comes up tail three times before it finally comes up heads and we stop.","title":"Recursion"},{"location":"walk-through/retrying-failed-or-errored-steps/","text":"Retrying Failed or Errored Steps \u00b6 You can specify a retryStrategy that will dictate how failed or errored steps are retried: # This example demonstrates the use of retry back offs apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : retry-backoff- spec : entrypoint : retry-backoff templates : - name : retry-backoff retryStrategy : limit : 10 retryPolicy : \"Always\" backoff : duration : \"1\" # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: \"2m\", \"6h\", \"1d\" factor : 2 maxDuration : \"1m\" # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: \"2m\", \"6h\", \"1d\" affinity : nodeAntiAffinity : {} container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] limit is the maximum number of times the container will be retried. retryPolicy specifies if a container will be retried on failure, error, both, or only transient errors (e.g. i/o or TLS handshake timeout). \"Always\" retries on both errors and failures. Also available: OnFailure (default), \" OnError \", and \" OnTransientError \" (available after v3.0.0-rc2). backoff is an exponential back-off nodeAntiAffinity prevents running steps on the same host. Current implementation allows only empty nodeAntiAffinity (i.e. nodeAntiAffinity: {} ) and by default it uses label kubernetes.io/hostname as the selector. Providing an empty retryStrategy (i.e. retryStrategy: {} ) will cause a container to retry until completion.","title":"Retrying Failed or Errored Steps"},{"location":"walk-through/retrying-failed-or-errored-steps/#retrying-failed-or-errored-steps","text":"You can specify a retryStrategy that will dictate how failed or errored steps are retried: # This example demonstrates the use of retry back offs apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : retry-backoff- spec : entrypoint : retry-backoff templates : - name : retry-backoff retryStrategy : limit : 10 retryPolicy : \"Always\" backoff : duration : \"1\" # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: \"2m\", \"6h\", \"1d\" factor : 2 maxDuration : \"1m\" # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: \"2m\", \"6h\", \"1d\" affinity : nodeAntiAffinity : {} container : image : python:alpine3.6 command : [ \"python\" , -c ] # fail with a 66% probability args : [ \"import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)\" ] limit is the maximum number of times the container will be retried. retryPolicy specifies if a container will be retried on failure, error, both, or only transient errors (e.g. i/o or TLS handshake timeout). \"Always\" retries on both errors and failures. Also available: OnFailure (default), \" OnError \", and \" OnTransientError \" (available after v3.0.0-rc2). backoff is an exponential back-off nodeAntiAffinity prevents running steps on the same host. Current implementation allows only empty nodeAntiAffinity (i.e. nodeAntiAffinity: {} ) and by default it uses label kubernetes.io/hostname as the selector. Providing an empty retryStrategy (i.e. 
retryStrategy: {} ) will cause a container to retry until completion.","title":"Retrying Failed or Errored Steps"},{"location":"walk-through/scripts-and-results/","text":"Scripts And Results \u00b6 Often, we just want a template that executes a script specified as a here-script (also known as a here document ) in the workflow spec. This example shows how to do that: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : scripts-bash- spec : entrypoint : bash-script-example templates : - name : bash-script-example steps : - - name : generate template : gen-random-int-bash - - name : print template : print-message arguments : parameters : - name : message value : \"{{steps.generate.outputs.result}}\" # The result of the here-script - name : gen-random-int-bash script : image : debian:9.4 command : [ bash ] source : | # Contents of the here-script cat /dev/urandom | od -N2 -An -i | awk -v f=1 -v r=100 '{printf \"%i\\n\", f + r * $1 / 65536}' - name : gen-random-int-python script : image : python:alpine3.6 command : [ python ] source : | import random i = random.randint(1, 100) print(i) - name : gen-random-int-javascript script : image : node:9.1-alpine command : [ node ] source : | var rand = Math.floor(Math.random() * 100); console.log(rand); - name : print-message inputs : parameters : - name : message container : image : alpine:latest command : [ sh , -c ] args : [ \"echo result was: {{inputs.parameters.message}}\" ] The script keyword allows the specification of the script body using the source tag. This creates a temporary file containing the script body and then passes the name of the temporary file as the final parameter to command , which should be an interpreter that executes the script body. The use of the script feature also assigns the standard output of running the script to a special output parameter named result . This allows you to use the result of running the script itself in the rest of the workflow spec. In this example, the result is simply echoed by the print-message template.","title":"Scripts And Results"},{"location":"walk-through/scripts-and-results/#scripts-and-results","text":"Often, we just want a template that executes a script specified as a here-script (also known as a here document ) in the workflow spec. This example shows how to do that: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : scripts-bash- spec : entrypoint : bash-script-example templates : - name : bash-script-example steps : - - name : generate template : gen-random-int-bash - - name : print template : print-message arguments : parameters : - name : message value : \"{{steps.generate.outputs.result}}\" # The result of the here-script - name : gen-random-int-bash script : image : debian:9.4 command : [ bash ] source : | # Contents of the here-script cat /dev/urandom | od -N2 -An -i | awk -v f=1 -v r=100 '{printf \"%i\\n\", f + r * $1 / 65536}' - name : gen-random-int-python script : image : python:alpine3.6 command : [ python ] source : | import random i = random.randint(1, 100) print(i) - name : gen-random-int-javascript script : image : node:9.1-alpine command : [ node ] source : | var rand = Math.floor(Math.random() * 100); console.log(rand); - name : print-message inputs : parameters : - name : message container : image : alpine:latest command : [ sh , -c ] args : [ \"echo result was: {{inputs.parameters.message}}\" ] The script keyword allows the specification of the script body using the source tag. 
This creates a temporary file containing the script body and then passes the name of the temporary file as the final parameter to command , which should be an interpreter that executes the script body. The use of the script feature also assigns the standard output of running the script to a special output parameter named result . This allows you to use the result of running the script itself in the rest of the workflow spec. In this example, the result is simply echoed by the print-message template.","title":"Scripts And Results"},{"location":"walk-through/secrets/","text":"Secrets \u00b6 Argo supports the same secrets syntax and mechanisms as Kubernetes Pod specs, which allows access to secrets as environment variables or volume mounts. See the Kubernetes documentation for more information. # To run this example, first create the secret by running: # kubectl create secret generic my-secret --from-literal=mypassword=S00perS3cretPa55word apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : secret-example- spec : entrypoint : whalesay # To access secrets as files, add a volume entry in spec.volumes[] and # then in the container template spec, add a mount using volumeMounts. volumes : - name : my-secret-vol secret : secretName : my-secret # name of an existing k8s secret templates : - name : whalesay container : image : alpine:3.7 command : [ sh , -c ] args : [ ' echo \"secret from env: $MYSECRETPASSWORD\"; echo \"secret from file: `cat /secret/mountpath/mypassword`\" ' ] # To access secrets as environment variables, use the k8s valueFrom and # secretKeyRef constructs. env : - name : MYSECRETPASSWORD # name of env var valueFrom : secretKeyRef : name : my-secret # name of an existing k8s secret key : mypassword # 'key' subcomponent of the secret volumeMounts : - name : my-secret-vol # mount file containing secret at /secret/mountpath mountPath : \"/secret/mountpath\"","title":"Secrets"},{"location":"walk-through/secrets/#secrets","text":"Argo supports the same secrets syntax and mechanisms as Kubernetes Pod specs, which allows access to secrets as environment variables or volume mounts. See the Kubernetes documentation for more information. # To run this example, first create the secret by running: # kubectl create secret generic my-secret --from-literal=mypassword=S00perS3cretPa55word apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : secret-example- spec : entrypoint : whalesay # To access secrets as files, add a volume entry in spec.volumes[] and # then in the container template spec, add a mount using volumeMounts. volumes : - name : my-secret-vol secret : secretName : my-secret # name of an existing k8s secret templates : - name : whalesay container : image : alpine:3.7 command : [ sh , -c ] args : [ ' echo \"secret from env: $MYSECRETPASSWORD\"; echo \"secret from file: `cat /secret/mountpath/mypassword`\" ' ] # To access secrets as environment variables, use the k8s valueFrom and # secretKeyRef constructs. env : - name : MYSECRETPASSWORD # name of env var valueFrom : secretKeyRef : name : my-secret # name of an existing k8s secret key : mypassword # 'key' subcomponent of the secret volumeMounts : - name : my-secret-vol # mount file containing secret at /secret/mountpath mountPath : \"/secret/mountpath\"","title":"Secrets"},{"location":"walk-through/sidecars/","text":"Sidecars \u00b6 A sidecar is another container that executes concurrently in the same pod as the main container and is useful in creating multi-container pods. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : sidecar-nginx- spec : entrypoint : sidecar-nginx-example templates : - name : sidecar-nginx-example container : image : appropriate/curl command : [ sh , -c ] # Try to read from nginx web server until it comes up args : [ \"until `curl -G 'http://127.0.0.1/' >& /tmp/out`; do echo sleep && sleep 1; done && cat /tmp/out\" ] # Create a simple nginx web server sidecars : - name : nginx image : nginx:1.13 command : [ nginx , -g , daemon off; ] In the above example, we create a sidecar container that runs Nginx as a simple web server. The order in which containers come up is random, so in this example the main container polls the Nginx container until it is ready to service requests. This is a good design pattern when designing multi-container systems: always wait for any services you need to come up before running your main code.","title":"Sidecars"},{"location":"walk-through/sidecars/#sidecars","text":"A sidecar is another container that executes concurrently in the same pod as the main container and is useful in creating multi-container pods. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : sidecar-nginx- spec : entrypoint : sidecar-nginx-example templates : - name : sidecar-nginx-example container : image : appropriate/curl command : [ sh , -c ] # Try to read from nginx web server until it comes up args : [ \"until `curl -G 'http://127.0.0.1/' >& /tmp/out`; do echo sleep && sleep 1; done && cat /tmp/out\" ] # Create a simple nginx web server sidecars : - name : nginx image : nginx:1.13 command : [ nginx , -g , daemon off; ] In the above example, we create a sidecar container that runs Nginx as a simple web server. The order in which containers come up is random, so in this example the main container polls the Nginx container until it is ready to service requests. This is a good design pattern when designing multi-container systems: always wait for any services you need to come up before running your main code.","title":"Sidecars"},{"location":"walk-through/steps/","text":"Steps \u00b6 In this example, we'll see how to create multi-step workflows, how to define more than one template in a workflow spec, and how to create nested workflows. Be sure to read the comments as they provide useful explanations. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : steps- spec : entrypoint : hello-hello-hello # This spec contains two templates: hello-hello-hello and whalesay templates : - name : hello-hello-hello # Instead of just running a container # This template has a sequence of steps steps : - - name : hello1 # hello1 is run before the following steps template : whalesay arguments : parameters : - name : message value : \"hello1\" - - name : hello2a # double dash => run after previous step template : whalesay arguments : parameters : - name : message value : \"hello2a\" - name : hello2b # single dash => run in parallel with previous step template : whalesay arguments : parameters : - name : message value : \"hello2b\" # This is the same template as from the previous example - name : whalesay inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] The above workflow spec prints three different flavors of \"hello\". The hello-hello-hello template consists of three steps . 
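In YAML terms, the nesting controls the ordering (a minimal sketch of the pattern, reusing the whalesay template from above): steps : - - name : step1 # first group template : whalesay - - name : step2a # second group, runs after step1 completes template : whalesay - name : step2b # same group, runs in parallel with step2a template : whalesay Each \"- -\" entry starts a new group that runs after the previous group finishes, while plain \"-\" entries join the current group and run in parallel.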
The first step named hello1 will be run in sequence whereas the next two steps named hello2a and hello2b will be run in parallel with each other. Using the argo CLI command, we can graphically display the execution history of this workflow spec, which shows that the steps named hello2a and hello2b ran in parallel with each other. STEP TEMPLATE PODNAME DURATION MESSAGE \u2714 steps-z2zdn hello-hello-hello \u251c\u2500\u2500\u2500\u2714 hello1 whalesay steps-z2zdn-27420706 2s \u2514\u2500\u252c\u2500\u2714 hello2a whalesay steps-z2zdn-2006760091 3s \u2514\u2500\u2714 hello2b whalesay steps-z2zdn-2023537710 3s","title":"Steps"},{"location":"walk-through/steps/#steps","text":"In this example, we'll see how to create multi-step workflows, how to define more than one template in a workflow spec, and how to create nested workflows. Be sure to read the comments as they provide useful explanations. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : steps- spec : entrypoint : hello-hello-hello # This spec contains two templates: hello-hello-hello and whalesay templates : - name : hello-hello-hello # Instead of just running a container # This template has a sequence of steps steps : - - name : hello1 # hello1 is run before the following steps template : whalesay arguments : parameters : - name : message value : \"hello1\" - - name : hello2a # double dash => run after previous step template : whalesay arguments : parameters : - name : message value : \"hello2a\" - name : hello2b # single dash => run in parallel with previous step template : whalesay arguments : parameters : - name : message value : \"hello2b\" # This is the same template as from the previous example - name : whalesay inputs : parameters : - name : message container : image : docker/whalesay command : [ cowsay ] args : [ \"{{inputs.parameters.message}}\" ] The above workflow spec prints three different flavors of \"hello\". The hello-hello-hello template consists of three steps . The first step named hello1 will be run in sequence whereas the next two steps named hello2a and hello2b will be run in parallel with each other. Using the argo CLI command, we can graphically display the execution history of this workflow spec, which shows that the steps named hello2a and hello2b ran in parallel with each other. STEP TEMPLATE PODNAME DURATION MESSAGE \u2714 steps-z2zdn hello-hello-hello \u251c\u2500\u2500\u2500\u2714 hello1 whalesay steps-z2zdn-27420706 2s \u2514\u2500\u252c\u2500\u2714 hello2a whalesay steps-z2zdn-2006760091 3s \u2514\u2500\u2714 hello2b whalesay steps-z2zdn-2023537710 3s","title":"Steps"},{"location":"walk-through/suspending/","text":"Suspending \u00b6 Workflows can be suspended by argo suspend WORKFLOW Or by specifying a suspend step on the workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : suspend-template- spec : entrypoint : suspend templates : - name : suspend steps : - - name : build template : whalesay - - name : approve template : approve - - name : delay template : delay - - name : release template : whalesay - name : approve suspend : {} - name : delay suspend : duration : \"20\" # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: \"2m\", \"6h\" - name : whalesay container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ] Once suspended, a Workflow will not schedule any new steps until it is resumed. 
It can be resumed manually by argo resume WORKFLOW Or automatically with a duration limit as the example above.","title":"Suspending"},{"location":"walk-through/suspending/#suspending","text":"Workflows can be suspended by argo suspend WORKFLOW Or by specifying a suspend step on the workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : suspend-template- spec : entrypoint : suspend templates : - name : suspend steps : - - name : build template : whalesay - - name : approve template : approve - - name : delay template : delay - - name : release template : whalesay - name : approve suspend : {} - name : delay suspend : duration : \"20\" # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: \"2m\", \"6h\" - name : whalesay container : image : docker/whalesay command : [ cowsay ] args : [ \"hello world\" ] Once suspended, a Workflow will not schedule any new steps until it is resumed. It can be resumed manually by argo resume WORKFLOW Or automatically with a duration limit as the example above.","title":"Suspending"},{"location":"walk-through/the-structure-of-workflow-specs/","text":"The Structure of Workflow Specs \u00b6 We now know enough about the basic components of a workflow spec. To review its basic structure: Kubernetes header including meta-data Spec body Entrypoint invocation with optional arguments List of template definitions For each template definition Name of the template Optionally a list of inputs Optionally a list of outputs Container invocation (leaf template) or a list of steps For each step, a template invocation To summarize, workflow specs are composed of a set of Argo templates where each template consists of an optional input section, an optional output section and either a container invocation or a list of steps where each step invokes another template. Note that the container section of the workflow spec will accept the same options as the container section of a pod spec, including but not limited to environment variables, secrets, and volume mounts. Similarly, for volume claims and volumes.","title":"The Structure of Workflow Specs"},{"location":"walk-through/the-structure-of-workflow-specs/#the-structure-of-workflow-specs","text":"We now know enough about the basic components of a workflow spec. To review its basic structure: Kubernetes header including meta-data Spec body Entrypoint invocation with optional arguments List of template definitions For each template definition Name of the template Optionally a list of inputs Optionally a list of outputs Container invocation (leaf template) or a list of steps For each step, a template invocation To summarize, workflow specs are composed of a set of Argo templates where each template consists of an optional input section, an optional output section and either a container invocation or a list of steps where each step invokes another template. Note that the container section of the workflow spec will accept the same options as the container section of a pod spec, including but not limited to environment variables, secrets, and volume mounts. 
Similarly, for volume claims and volumes.","title":"The Structure of Workflow Specs"},{"location":"walk-through/timeouts/","text":"Timeouts \u00b6 You can use the field activeDeadlineSeconds to limit the elapsed time for a workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : timeouts- spec : activeDeadlineSeconds : 10 # terminate workflow after 10 seconds entrypoint : sleep templates : - name : sleep container : image : alpine:latest command : [ sh , -c ] args : [ \"echo sleeping for 1m; sleep 60; echo done\" ] You can limit the elapsed time for a specific template as well: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : timeouts- spec : entrypoint : sleep templates : - name : sleep activeDeadlineSeconds : 10 # terminate container template after 10 seconds container : image : alpine:latest command : [ sh , -c ] args : [ \"echo sleeping for 1m; sleep 60; echo done\" ]","title":"Timeouts"},{"location":"walk-through/timeouts/#timeouts","text":"You can use the field activeDeadlineSeconds to limit the elapsed time for a workflow: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : timeouts- spec : activeDeadlineSeconds : 10 # terminate workflow after 10 seconds entrypoint : sleep templates : - name : sleep container : image : alpine:latest command : [ sh , -c ] args : [ \"echo sleeping for 1m; sleep 60; echo done\" ] You can limit the elapsed time for a specific template as well: apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : timeouts- spec : entrypoint : sleep templates : - name : sleep activeDeadlineSeconds : 10 # terminate container template after 10 seconds container : image : alpine:latest command : [ sh , -c ] args : [ \"echo sleeping for 1m; sleep 60; echo done\" ]","title":"Timeouts"},{"location":"walk-through/volumes/","text":"Volumes \u00b6 The following example dynamically creates a volume and then uses the volume in a two step workflow. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : volumes-pvc- spec : entrypoint : volumes-pvc-example volumeClaimTemplates : # define volume, same syntax as k8s Pod spec - metadata : name : workdir # name of volume claim spec : accessModes : [ \"ReadWriteOnce\" ] resources : requests : storage : 1Gi # Gi => 1024 * 1024 * 1024 templates : - name : volumes-pvc-example steps : - - name : generate template : whalesay - - name : print template : print-message - name : whalesay container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt\" ] # Mount workdir volume at /mnt/vol before invoking docker/whalesay volumeMounts : # same syntax as k8s Pod spec - name : workdir mountPath : /mnt/vol - name : print-message container : image : alpine:latest command : [ sh , -c ] args : [ \"echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt\" ] # Mount workdir volume at /mnt/vol before invoking docker/whalesay volumeMounts : # same syntax as k8s Pod spec - name : workdir mountPath : /mnt/vol Volumes are a very useful way to move large amounts of data from one step in a workflow to another. Depending on the system, some volumes may be accessible concurrently from multiple steps. In some cases, you want to access an already existing volume rather than creating/destroying one dynamically. 
# Define Kubernetes PVC kind : PersistentVolumeClaim apiVersion : v1 metadata : name : my-existing-volume spec : accessModes : [ \"ReadWriteOnce\" ] resources : requests : storage : 1Gi --- apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : volumes-existing- spec : entrypoint : volumes-existing-example volumes : # Pass my-existing-volume as an argument to the volumes-existing-example template # Same syntax as k8s Pod spec - name : workdir persistentVolumeClaim : claimName : my-existing-volume templates : - name : volumes-existing-example steps : - - name : generate template : whalesay - - name : print template : print-message - name : whalesay container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol - name : print-message container : image : alpine:latest command : [ sh , -c ] args : [ \"echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol It's also possible to declare existing volumes at the template level, instead of the workflow level. Workflows can generate volumes using a resource step. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : template-level-volume- spec : entrypoint : generate-and-use-volume templates : - name : generate-and-use-volume steps : - - name : generate-volume template : generate-volume arguments : parameters : - name : pvc-size # In a real-world example, this could be generated by a previous workflow step. value : '1Gi' - - name : generate template : whalesay arguments : parameters : - name : pvc-name value : '{{steps.generate-volume.outputs.parameters.pvc-name}}' - - name : print template : print-message arguments : parameters : - name : pvc-name value : '{{steps.generate-volume.outputs.parameters.pvc-name}}' - name : generate-volume inputs : parameters : - name : pvc-size resource : action : create setOwnerReference : true manifest : | apiVersion: v1 kind: PersistentVolumeClaim metadata: generateName: pvc-example- spec: accessModes: ['ReadWriteOnce', 'ReadOnlyMany'] resources: requests: storage: '{{inputs.parameters.pvc-size}}' outputs : parameters : - name : pvc-name valueFrom : jsonPath : '{.metadata.name}' - name : whalesay inputs : parameters : - name : pvc-name volumes : - name : workdir persistentVolumeClaim : claimName : '{{inputs.parameters.pvc-name}}' container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol - name : print-message inputs : parameters : - name : pvc-name volumes : - name : workdir persistentVolumeClaim : claimName : '{{inputs.parameters.pvc-name}}' container : image : alpine:latest command : [ sh , -c ] args : [ \"echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol","title":"Volumes"},{"location":"walk-through/volumes/#volumes","text":"The following example dynamically creates a volume and then uses the volume in a two step workflow. 
apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : volumes-pvc- spec : entrypoint : volumes-pvc-example volumeClaimTemplates : # define volume, same syntax as k8s Pod spec - metadata : name : workdir # name of volume claim spec : accessModes : [ \"ReadWriteOnce\" ] resources : requests : storage : 1Gi # Gi => 1024 * 1024 * 1024 templates : - name : volumes-pvc-example steps : - - name : generate template : whalesay - - name : print template : print-message - name : whalesay container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt\" ] # Mount workdir volume at /mnt/vol before invoking docker/whalesay volumeMounts : # same syntax as k8s Pod spec - name : workdir mountPath : /mnt/vol - name : print-message container : image : alpine:latest command : [ sh , -c ] args : [ \"echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt\" ] # Mount workdir volume at /mnt/vol before invoking docker/whalesay volumeMounts : # same syntax as k8s Pod spec - name : workdir mountPath : /mnt/vol Volumes are a very useful way to move large amounts of data from one step in a workflow to another. Depending on the system, some volumes may be accessible concurrently from multiple steps. In some cases, you want to access an already existing volume rather than creating/destroying one dynamically. # Define Kubernetes PVC kind : PersistentVolumeClaim apiVersion : v1 metadata : name : my-existing-volume spec : accessModes : [ \"ReadWriteOnce\" ] resources : requests : storage : 1Gi --- apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : volumes-existing- spec : entrypoint : volumes-existing-example volumes : # Pass my-existing-volume as an argument to the volumes-existing-example template # Same syntax as k8s Pod spec - name : workdir persistentVolumeClaim : claimName : my-existing-volume templates : - name : volumes-existing-example steps : - - name : generate template : whalesay - - name : print template : print-message - name : whalesay container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol - name : print-message container : image : alpine:latest command : [ sh , -c ] args : [ \"echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol It's also possible to declare existing volumes at the template level, instead of the workflow level. Workflows can generate volumes using a resource step. apiVersion : argoproj.io/v1alpha1 kind : Workflow metadata : generateName : template-level-volume- spec : entrypoint : generate-and-use-volume templates : - name : generate-and-use-volume steps : - - name : generate-volume template : generate-volume arguments : parameters : - name : pvc-size # In a real-world example, this could be generated by a previous workflow step. 
value : '1Gi' - - name : generate template : whalesay arguments : parameters : - name : pvc-name value : '{{steps.generate-volume.outputs.parameters.pvc-name}}' - - name : print template : print-message arguments : parameters : - name : pvc-name value : '{{steps.generate-volume.outputs.parameters.pvc-name}}' - name : generate-volume inputs : parameters : - name : pvc-size resource : action : create setOwnerReference : true manifest : | apiVersion: v1 kind: PersistentVolumeClaim metadata: generateName: pvc-example- spec: accessModes: ['ReadWriteOnce', 'ReadOnlyMany'] resources: requests: storage: '{{inputs.parameters.pvc-size}}' outputs : parameters : - name : pvc-name valueFrom : jsonPath : '{.metadata.name}' - name : whalesay inputs : parameters : - name : pvc-name volumes : - name : workdir persistentVolumeClaim : claimName : '{{inputs.parameters.pvc-name}}' container : image : docker/whalesay:latest command : [ sh , -c ] args : [ \"echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol - name : print-message inputs : parameters : - name : pvc-name volumes : - name : workdir persistentVolumeClaim : claimName : '{{inputs.parameters.pvc-name}}' container : image : alpine:latest command : [ sh , -c ] args : [ \"echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt\" ] volumeMounts : - name : workdir mountPath : /mnt/vol","title":"Volumes"}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index 61628f0b8085..b8e5de9b28a7 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,942 +2,942 @@ None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 daily None - 2023-12-10 + 2023-12-11 
daily
 [... remaining sitemap.xml <url> entries omitted: each <lastmod> changes from 2023-12-10 to 2023-12-11; <loc> is None and <changefreq> is daily throughout ...]
diff --git a/sitemap.xml.gz b/sitemap.xml.gz
index 222df5c71820c9ece23a908bbe27df87c6c89693..75fb9821dbf9b45c5a8c06c6c7da84635cf03e26 100644
GIT binary patch
 [binary delta omitted: sitemap.xml.gz regenerated to match the updated sitemap.xml]