diff --git a/frinx-workflow-manager/blueprints/blueprints_create.gif b/frinx-device-inventory/blueprints/blueprints_create.gif similarity index 100% rename from frinx-workflow-manager/blueprints/blueprints_create.gif rename to frinx-device-inventory/blueprints/blueprints_create.gif diff --git a/frinx-workflow-manager/blueprints/blueprints_use.gif b/frinx-device-inventory/blueprints/blueprints_use.gif similarity index 100% rename from frinx-workflow-manager/blueprints/blueprints_use.gif rename to frinx-device-inventory/blueprints/blueprints_use.gif diff --git a/frinx-workflow-manager/blueprints/readme.md b/frinx-device-inventory/blueprints/readme.md similarity index 100% rename from frinx-workflow-manager/blueprints/readme.md rename to frinx-device-inventory/blueprints/readme.md diff --git a/frinx-device-inventory/index.yaml b/frinx-device-inventory/index.yaml new file mode 100644 index 000000000..556482ad5 --- /dev/null +++ b/frinx-device-inventory/index.yaml @@ -0,0 +1,3 @@ +Label: Device Inventory +icon: codespaces +order: 1500 \ No newline at end of file diff --git a/frinx-workflow-manager/inventory/fm_install.gif b/frinx-device-inventory/inventory/fm_install.gif similarity index 100% rename from frinx-workflow-manager/inventory/fm_install.gif rename to frinx-device-inventory/inventory/fm_install.gif diff --git a/frinx-workflow-manager/inventory/readme.md b/frinx-device-inventory/inventory/readme.md similarity index 100% rename from frinx-workflow-manager/inventory/readme.md rename to frinx-device-inventory/inventory/readme.md diff --git a/frinx-machine/api-docs/api-docs.png b/frinx-machine/api-docs/api-docs.png new file mode 100644 index 000000000..646575a87 Binary files /dev/null and b/frinx-machine/api-docs/api-docs.png differ diff --git a/frinx-machine/api-docs/api-gateway.png b/frinx-machine/api-docs/api-gateway.png new file mode 100644 index 000000000..656831e4f Binary files /dev/null and b/frinx-machine/api-docs/api-gateway.png differ diff --git 
a/frinx-machine/api-docs/index.md b/frinx-machine/api-docs/index.md new file mode 100644 index 000000000..36e731815 --- /dev/null +++ b/frinx-machine/api-docs/index.md @@ -0,0 +1,24 @@ +--- +icon: plug +expanded: false +order: 9300 +--- + +# API Gateway + +Communication with FRINX Machine is facilitated through both a user-friendly UI and a robust REST API. +All our services offer REST and GraphQL APIs, allowing seamless interaction for both users and automated systems. + +Our architecture employs an API Gateway to consolidate all endpoints into a single, accessible location. Each service is assigned a unique path, simplifying access. +To connect with a specific service, you only need to know the FM KrakenD/OAuth2-Proxy ingress host and the designated path for your desired service. +This streamlined approach ensures efficient and straightforward communication with FRINX Machine. + +## API Gateway diagram + +![API Gateway](api-gateway.png) + +## API Docs + +API documentation is accessible via the FRINX Machine installation. + +![API Docs](api-docs.png) diff --git a/frinx-machine/azure-ad/azure_api_permissions.png b/frinx-machine/azure-ad/azure_api_permissions.png deleted file mode 100644 index a970e785b..000000000 Binary files a/frinx-machine/azure-ad/azure_api_permissions.png and /dev/null differ diff --git a/frinx-machine/azure-ad/azure_client_secret.png b/frinx-machine/azure-ad/azure_client_secret.png deleted file mode 100644 index 67b614c88..000000000 Binary files a/frinx-machine/azure-ad/azure_client_secret.png and /dev/null differ diff --git a/frinx-machine/azure-ad/azure_tenant.png b/frinx-machine/azure-ad/azure_tenant.png deleted file mode 100644 index e7c08c69a..000000000 Binary files a/frinx-machine/azure-ad/azure_tenant.png and /dev/null differ diff --git a/frinx-machine/azure-ad/azure_token_configuration.png b/frinx-machine/azure-ad/azure_token_configuration.png deleted file mode 100644 index d4c66b74a..000000000 Binary files 
a/frinx-machine/azure-ad/azure_token_configuration.png and /dev/null differ diff --git a/frinx-machine/azure-ad/readme.md b/frinx-machine/azure-ad/readme.md deleted file mode 100644 index 5b6b60988..000000000 --- a/frinx-machine/azure-ad/readme.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -label: Frinx Machine with Azure AD -icon: key -order: 1500 ---- - -# Frinx Machine with Azure AD - -Frinx Machine supports authentification and authorization via Azure AD. -The following sections describe how to set up Azure AD for Frinx Machine. - -## Client configuration - -Register the application in your Azure AD and configure the following settings. - - -### Redirect URIs - -Set platform redirect URIs on the Authentication page. The table below shows examples of configuration settings. - -| Syntax | Platform configuration | Redirect URI | -| --- | --- | --- | -| Frontend Login | Single-page application | https://<**IP/DNS**>/ ,e.g. https://localhost/ | -| Workflow Manager docs (swager) | Web | https://<**IP/DNS**>/oauth2-redirect.html | -| Cloud swagger | Web | https://editor.swagger.io/oauth2-redirect.html | -| Local Postman | Web | https://oauth.pstmn.io/v1/callback | -| Cloud Postman | Web | https://getpostman.com/oauth2/callback | - -**Frontent login URI** is passed to the installation script `azure_ad.sh` via `--redirect_url` flag. - - -### Implicit flow and single/multi-tenancy settings -On the same page choose single/multi-tenancy. Based on this setting the parameter `--tenant_name` is defined in the installation script `azure_ad.sh`. - -For a single-tenant, use Azure AD domain name from AD overview. For multi-tenant use value `common`. -Enabled implicit flow is optional based on specific requirements. - -![Token config](azure_tenant.png) - -### API permissions - -![Client API permissions](azure_api_permissions.png) - -### Client secrets - -Generate secret and use it as an input parameter for `--client_secret` flag in the installation script `azure_ad.sh`. 
-This secret is used in KrakenD azure plugin for translating group id to the group name (human-readable format). - -![Azure client secrets](azure_client_secret.png) - - -### Token claims configuration - -![Token claims configuration](azure_token_configuration.png) - -Example of encoded JWT token with claims. These claims are transferred to the request header (see [KrakenD Azure Plugin docs](https://github.com/FRINXio/krakend-azure-plugin) for more info). - -``` json -{ - ...... - "tid": "aaaaaaaa-1234-5678-abcd-abcd12345678", - "name": "FRINX Super Administrator (Test)", - "oid": "d040c2a8-aaaa-bbbb-cccc-f2900fea4f51", - "preferred_username": "frinx-user@yourname.onmicrosoft.com", - "roles": [ - "User.ReadWrite" - ], - "groups": [ - "bbbbbbbb-cccc-1234-5678-abcd12345678" - ], - ...... -} -``` - -## RBAC configuration - -Super user is defined in .env file via **ADMIN_GROUP** variable. - -### Workflow Manager - -RBAC proxy adds 2 features on top of tenant proxy: -* Ensures user authorization to access certain endpoints -* Filters workflow definitions and workflow executions based on user's roles, groups and userID - -RBAC support simply distinguishes 2 user types: an admin and everyone else. -An admin has full access to workflow API while the ordinary user can only: -* Read workflow definitions - * Ordinary users can only view workflow definitions belonging to the same groups - * A workflow definition (created by an admin) can have multiple labels assigned - * A user can belong into multiple groups - * User groups are identified in HTTP request's header field `x-auth-user-roles` - * If an ordinary user's group matches one of the workflow labels, the workflow becomes visible to the user -* Execute visible workflow definitions -* Monitor running executions - * Only those executed by the user currently logged in - -Define user roles in workflow by adding role or group name to description label. 
- -Example: added User.ReadWrite, Role.ReadWrite, Group.ReadWrite labels to workflow description. - -``` json -{ - "name": "Install_all_from_inventory", - "description": "{\"description\": \"Install all devices from device inventory\", \"labels\": [\"User.ReadWrite\", \"Role.ReadWrite\", \"Group.ReadWrite\"]}", - "version": 1, - "tasks": [ - ...... -``` - -### Uniconfig - -Super-users (based on their role and user groups) can use all REST APIs. -Regular users will only be able to use GET REST API requests. - -| Role | READ (GET REQUEST) | WRITE (ALL REQUEST) | -| --- | --- | --- | -|Admin (Superuser) | true | true | -|Regular user | true | false | - -### Resource Manager - -A simple RBAC model is implemented where only super-users (based on their role and user groups) can manipulate resource types, resource pools and labels. Regular users will only be able to read the above entities, allocate and free resources. - -|Role | READ | WRITE | -| --- | --- | --- | -|Admin (Superuser) | true | true | -|Regular user | true | false | diff --git a/frinx-machine/getting-started/components.png b/frinx-machine/getting-started/components.png deleted file mode 100644 index 53199b3ca..000000000 Binary files a/frinx-machine/getting-started/components.png and /dev/null differ diff --git a/frinx-machine/getting-started/conductor_user23.png b/frinx-machine/getting-started/conductor_user23.png deleted file mode 100644 index a2413e9a9..000000000 Binary files a/frinx-machine/getting-started/conductor_user23.png and /dev/null differ diff --git a/frinx-machine/getting-started/conductor_user24.png b/frinx-machine/getting-started/conductor_user24.png deleted file mode 100644 index 1bfdb9ba3..000000000 Binary files a/frinx-machine/getting-started/conductor_user24.png and /dev/null differ diff --git a/frinx-machine/getting-started/conductor_user25.png b/frinx-machine/getting-started/conductor_user25.png deleted file mode 100644 index 9cb9ee64c..000000000 Binary files 
a/frinx-machine/getting-started/conductor_user25.png and /dev/null differ diff --git a/frinx-machine/getting-started/conductor_user26.png b/frinx-machine/getting-started/conductor_user26.png deleted file mode 100644 index 5f4a43230..000000000 Binary files a/frinx-machine/getting-started/conductor_user26.png and /dev/null differ diff --git a/frinx-machine/getting-started/conductor_user27.png b/frinx-machine/getting-started/conductor_user27.png deleted file mode 100644 index 0a33af77c..000000000 Binary files a/frinx-machine/getting-started/conductor_user27.png and /dev/null differ diff --git a/frinx-machine/getting-started/conductor_user28.png b/frinx-machine/getting-started/conductor_user28.png deleted file mode 100644 index fe6cbccdc..000000000 Binary files a/frinx-machine/getting-started/conductor_user28.png and /dev/null differ diff --git a/frinx-machine/getting-started/conductor_user29.png b/frinx-machine/getting-started/conductor_user29.png deleted file mode 100644 index 1eb82f0c1..000000000 Binary files a/frinx-machine/getting-started/conductor_user29.png and /dev/null differ diff --git a/frinx-machine/getting-started/conductor_user30.png b/frinx-machine/getting-started/conductor_user30.png deleted file mode 100644 index 6b9080921..000000000 Binary files a/frinx-machine/getting-started/conductor_user30.png and /dev/null differ diff --git a/frinx-machine/getting-started/conductor_user31.png b/frinx-machine/getting-started/conductor_user31.png deleted file mode 100644 index 7745da52c..000000000 Binary files a/frinx-machine/getting-started/conductor_user31.png and /dev/null differ diff --git a/frinx-machine/getting-started/conductor_user32.png b/frinx-machine/getting-started/conductor_user32.png deleted file mode 100644 index 38049e0f6..000000000 Binary files a/frinx-machine/getting-started/conductor_user32.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_0.png b/frinx-machine/getting-started/image_0.png deleted file mode 100644 index 
c757754b5..000000000 Binary files a/frinx-machine/getting-started/image_0.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_1.png b/frinx-machine/getting-started/image_1.png deleted file mode 100644 index 71523ac2f..000000000 Binary files a/frinx-machine/getting-started/image_1.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_10.png b/frinx-machine/getting-started/image_10.png deleted file mode 100644 index 7e5446244..000000000 Binary files a/frinx-machine/getting-started/image_10.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_12.png b/frinx-machine/getting-started/image_12.png deleted file mode 100644 index 0e7f4769d..000000000 Binary files a/frinx-machine/getting-started/image_12.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_13.png b/frinx-machine/getting-started/image_13.png deleted file mode 100644 index 947097280..000000000 Binary files a/frinx-machine/getting-started/image_13.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_14.png b/frinx-machine/getting-started/image_14.png deleted file mode 100644 index c86f60f02..000000000 Binary files a/frinx-machine/getting-started/image_14.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_15.png b/frinx-machine/getting-started/image_15.png deleted file mode 100644 index 51bf0269e..000000000 Binary files a/frinx-machine/getting-started/image_15.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_16.png b/frinx-machine/getting-started/image_16.png deleted file mode 100644 index 907b63a2b..000000000 Binary files a/frinx-machine/getting-started/image_16.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_17.png b/frinx-machine/getting-started/image_17.png deleted file mode 100644 index da1edb78e..000000000 Binary files a/frinx-machine/getting-started/image_17.png and /dev/null differ diff --git 
a/frinx-machine/getting-started/image_18.png b/frinx-machine/getting-started/image_18.png deleted file mode 100644 index 0dd13a124..000000000 Binary files a/frinx-machine/getting-started/image_18.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_19.png b/frinx-machine/getting-started/image_19.png deleted file mode 100644 index c8b7376f0..000000000 Binary files a/frinx-machine/getting-started/image_19.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_1_0.png b/frinx-machine/getting-started/image_1_0.png deleted file mode 100644 index e3302442f..000000000 Binary files a/frinx-machine/getting-started/image_1_0.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_2.png b/frinx-machine/getting-started/image_2.png deleted file mode 100644 index 0d5a69199..000000000 Binary files a/frinx-machine/getting-started/image_2.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_20.png b/frinx-machine/getting-started/image_20.png deleted file mode 100644 index a4e09c353..000000000 Binary files a/frinx-machine/getting-started/image_20.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_21.png b/frinx-machine/getting-started/image_21.png deleted file mode 100644 index 27460e200..000000000 Binary files a/frinx-machine/getting-started/image_21.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_22.png b/frinx-machine/getting-started/image_22.png deleted file mode 100644 index 598c0b297..000000000 Binary files a/frinx-machine/getting-started/image_22.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_3.png b/frinx-machine/getting-started/image_3.png deleted file mode 100644 index bc7fba64b..000000000 Binary files a/frinx-machine/getting-started/image_3.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_4.png b/frinx-machine/getting-started/image_4.png deleted file mode 100644 index 88e72dde2..000000000 Binary 
files a/frinx-machine/getting-started/image_4.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_5.png b/frinx-machine/getting-started/image_5.png deleted file mode 100644 index ee2482109..000000000 Binary files a/frinx-machine/getting-started/image_5.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_6.png b/frinx-machine/getting-started/image_6.png deleted file mode 100644 index 0797fa1ac..000000000 Binary files a/frinx-machine/getting-started/image_6.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_7.png b/frinx-machine/getting-started/image_7.png deleted file mode 100644 index 7e86616c1..000000000 Binary files a/frinx-machine/getting-started/image_7.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_8.png b/frinx-machine/getting-started/image_8.png deleted file mode 100644 index 9f7d294c4..000000000 Binary files a/frinx-machine/getting-started/image_8.png and /dev/null differ diff --git a/frinx-machine/getting-started/image_9.png b/frinx-machine/getting-started/image_9.png deleted file mode 100644 index e702d558c..000000000 Binary files a/frinx-machine/getting-started/image_9.png and /dev/null differ diff --git a/frinx-machine/getting-started/introduction.rst-DEL b/frinx-machine/getting-started/introduction.rst-DEL deleted file mode 100644 index 27c2dc491..000000000 --- a/frinx-machine/getting-started/introduction.rst-DEL +++ /dev/null @@ -1,102 +0,0 @@ -Introduction -============ - - -FRINX Opendaylight (network automation solution) ------------------------------------------------- - -* Connects to the devices in network - -* Keeps connections between devices alive - -* Pushes configuration data to devices - -* Pulls configuration and operational data from devices - - -Netflix Conductor (workflow engine) ------------------------------------ - -* Chains atomic tasks into complex workflows - -* Defines, executes and monitors workflows (via REST or UI) - - -Elasticsearch 
(inventory and log data) --------------------------------------- - -* Storing inventory data - -* Storing log data - -**The goal is to provide a platform enabling easy definition, execution and -monitoring of complex workflows using FRINX Opendaylight.** - -An example workflow could consist of: - -#. Pulling device IP and mgmt credentials from an external IPAM system - -#. Mounting a device - -#. Verifying the device is connected - -#. Executing a configuration template - -#. Unmounting a device - -We chose Netflix’s conductor workflow engine since it has been proven to be -highly scalable open-source technology that integrates very well with FRINX -Opendaylight. Further information about conductor can be found at: - - -* **Github:** `https://github.com/Netflix/conductor `_ - -* **Docs:** `https://netflix.github.io/conductor `_ - - -High Level Architecture ------------------------ - -Following diagram outlines main functional components in the FRINX Machine solution: - - -.. image:: image_0.png - :target: image_0.png - :alt: preview1 - - -The following diagram outlines the container components of the FRINX Machine solution: - - -.. image:: image_1_0.png - :target: image_1_0.png - :alt: preview - - -FRINX Machine repository is available at: -`https://github.com/FRINXio/FRINX-machine -`_ - -Frinx-conductor repository is available at: -`https://github.com/FRINXio/frinx-conductor -`_ - -Specialized ODL tasks are available at: -`https://github.com/FRINXio/FRINX-machine/tree/master/microservices/netinfra_utils -`_ - - -Defining a workflow -------------------- - -Workflows are defined using a JSON based domain specific language (DSL) by -wiring a set of tasks together. The tasks are either control tasks (fork, -conditional etc) or application tasks (e.g. encode a file) that are executed on -a remote machine. - -FRINX Machine distribution comes in with number of pre-packaged workflows. 
- -Detailed description of workflow and task definitions along with examples can be -found at official `Netflix Conductor documentation -`_ - diff --git a/frinx-machine/getting-started/operating-frinx-machine.rst-DEL b/frinx-machine/getting-started/operating-frinx-machine.rst-DEL deleted file mode 100644 index cfbb355c7..000000000 --- a/frinx-machine/getting-started/operating-frinx-machine.rst-DEL +++ /dev/null @@ -1,111 +0,0 @@ -Operating FRINX Machine -======================= - - -Starting a workflow -------------------- - -Initiate **Workflow UI** - - -#. Open web browser -#. Type localhost:5000 - -Navigate to: - -* Metadata - - * Workflow Defs - -There is a list of all available workflow definitions. -Choose one and switch to tab **Input**: - - -.. image:: image_1.png - :target: image_1.png - :alt: preview1 - - -Input ------ - -Workflows are supplied inputs by client when a new execution is triggered. - -Workflow input is a JSON payload that is available via ``${workflow.input...}`` -expressions. - -Each task in the workflow is given input based on the inputParameters template -configured in workflow definition. - -``inputParameters`` is a JSON fragment with value containing parameters for -mapping values from input or output of a workflow or another task during the -execution. - - -Start workflow --------------- - -Fill in JSON generated input fields. Input fields may contain default value or -description provided in workflow definition. - -Press the button **Execute workflow** in order to start current workflow. - -**Console log** section at the bottom provides status information about workflow -execution. - - -.. image:: image_2.png - :target: image_2.png - :alt: preview2 - - -Executed workflows can be found at **Executions** tab on top of the column in -menu on the left. - - -Inspecting executed workflows ------------------------------ - -Navigate to: - - -* Executions - - * All - -Then, search and filter for specific workflows. 
- -After clicking on specific workflow, you are able to see its details including -outputs as well as other information about current workflow. - - -.. image:: image_3.png - :target: image_3.png - :alt: preview3 - - -Workflow actions ----------------- - -Workflow actions are available after clicking on specific executed workflow. - -You are able to execute these actions to a specific workflow: - - -* terminate -* rerun -* restart -* retry -* pause -* resume - -Running previously executed workflow as new workflow with same or edited inputs: - -Navigate to **Edit Input** tab, where you are able to edit specific inputs and -run workflow again. - - -.. image:: image_4.png - :target: image_4.png - :alt: preview4 - diff --git a/frinx-machine/getting-started/readme.md b/frinx-machine/getting-started/readme.md index 4749b139c..f3840f927 100644 --- a/frinx-machine/getting-started/readme.md +++ b/frinx-machine/getting-started/readme.md @@ -1,96 +1,100 @@ --- icon: rocket expanded: false -order: 2000 +order: 9999 --- # FRINX Machine introduction -FRINX Machine is a dockerized deployment of multiple elements. The FRINX -Machine enables large-scale automation of network devices, services and -retrieval of operational state data from a network. User-specific -workflows are designed through the use of OpenConfig NETCONF & YANG -models, vendor native models, and the CLI. The FRINX Machine uses -dockerized containers that are designed and tested to work together to -create a user-specific solution. +In today's rapidly evolving digital landscape, efficient network management and automation are crucial for maintaining robust and scalable IT infrastructures. +Frinx Machine emerges as a powerful solution tailored to meet these demands, providing an integrated platform designed to simplify and enhance network automation. -!!! -FRINX-machine can be installed in Kubernetes using the [Helm chart](https://artifacthub.io/packages/helm/frinx-helm-charts/frinx-machine) -!!! 
+## What is Frinx Machine? + +Frinx Machine is an advanced network automation platform delivering a comprehensive suite of tools for managing and automating network infrastructures. +It is designed to streamline network operations, reduce complexity, and drive efficiency through a unified, scalable, and flexible approach. + +Frinx Machine enables seamless management of network configurations across multi-vendor environments. +It provides a consistent interface for deploying and managing configurations, reducing the complexities associated with heterogeneous network devices. ## FRINX Machine core components -### FRINX UniConfig -- Connects to the devices in the network -- Retrieves and stores configuration from devices -- Pushes configuration data to devices -- Builds diffs between actual and intended config to execute atomic - configuration changes -- Retrieves operational data from devices -- Manages transactions across one or multiple devices -- Translates between CLI, vendor native, and industry-standard data - models (i.e. OpenConfig) -- Reads and stores vendor native data models from mounted network - devices (i.e YANG models) -- Ensures high availability, reducing network outages and downtime -- Executes commands on multiple devices simultaneously +### High-Level Architecture -### Netflix Conductor (workflow engine) - the core of the Workflow manager +The following diagram outlines the main functional components in the FRINX +Machine solution: -- Atomic tasks are chained together into more complex workflows -- Defines, executes and monitors workflows (via REST or UI) +![FM Architecture](FRINX_Machine_Architecture.png) -We chose Netflix’s conductor workflow engine since it has been proven to -be a highly scalable open-source technology that integrates very well with -FRINX UniConfig. 
Further information about Conductor can be found at: -- **Sources:** https://github.com/Netflix/conductor -- **FRINXio sources:** https://github.com/FRINXio/conductor-community -- **Docs:** https://conductor-oss.github.io/conductor/index.html +### UniConfig -### Postgres database - the core of Device inventory +- Connects to the devices in the network +- Retrieves and stores configuration from devices +- Pushes configuration data to devices +- Builds diffs between actual and intended config to execute atomic + configuration changes +- Retrieves operational data from devices +- Manages transactions across one or multiple devices +- Translates between CLI, vendor native, and industry-standard data + models (i.e. OpenConfig) +- Reads and stores vendor native data models from mounted network + devices (i.e YANG models) +- Ensures high availability, reducing network outages and downtime +- Executes commands on multiple devices simultaneously -- Stores inventory data +### Workflow manager (Conductor) -### Monitoring software: loki + grafana + influxdb + telegraf - core of Monitoring +- Atomic tasks are chained together into more complex workflows +- Defines, executes and monitors workflows (via REST or UI) +- Schedule workflow executions -- Stores workflow execution and metadata -- Stores UniConfig logs -- Stores docker container logs +We chose Netflix’s conductor workflow engine since it has been proven to +be a highly scalable open-source technology that integrates very well with +FRINX UniConfig. 
Further information about Conductor can be found at: -### UniConfig UI (user interface) aka Frinx frontend +- **FRINXio conductor sources:** https://github.com/FRINXio/conductor +- **FRINXio conductor community sources:** https://github.com/FRINXio/conductor-community +- **Sources:** https://github.com/conductor-oss/conductor +- **Docs:** https://conductor-oss.github.io/conductor/index.html -- This is the primary user interface for the FRINX Machine -- Allows users to create, edit or run workflows and monitor any open - tasks -- Allows users to mount devices and view their status. The UI allows - users to execute UniConfig operations such as read, edit, and - commit. Configurations can be pushed to or synced from the network -- Device inventory, workflow execution, and resource manager are - all accessible through the UI +### Device inventory -## High-Level Architecture +- Stores all important device information in one place +- Maintains devices in all deployment zones from one place +- Notifies about changes in inventory via Kafka -The following diagram outlines the main functional components in the FRINX -Machine solution: +### Topology discovery -![FM Architecture](FRINX_Machine_Architecture.png) +- Acquires information from the live network + - Relying on UniConfig as its primary source of network information +- Uses that information to build topology view(s): + - Across multiple layers of the network + - e.g. LLDP data to build physical topology view or routing data to build L3 view +- Performs reconciliation across the layers in order to provide a unified topology view +- Provides an API to query topologies +- Provides Kafka notifications about changes in the topology +- Consumes and stores device metadata events from a Kafka topic + + +### Performance monitor -## Defining a workflow +- Collects performance metrics about devices in a time-series based database + - Relies on UniConfig as a producer of the performance metrics of devices. 
+ - Relies on Device Inventory as a producer of information about devices, such as device name, vendor, model, and version. +- Unifies performance metrics and produces them to a Kafka broker. +- Provides an API to query performance metrics of devices. -## Defining a workflow -The workflows are defined using a JSON-based domain-specific language -(DSL) by wiring a set of tasks together. The tasks are either control -tasks (fork, conditional, etc.) or application tasks (i.e. encoding a -file) that are executed on a remote device. +## Monitoring -The FRINX Machine distribution comes pre-loaded with several -standardized workflows +### Loki + Grafana + InfluxDB + Telegraf - core of Monitoring -A detailed description of how to run workflows and tasks, along with -examples can be found in the official [Netflix Conductor -documentation](https://conductor-oss.github.io/conductor/documentation/configuration/workflowdef/index.html) +- Loki: Efficient log aggregation and querying. +- Telegraf: Data collection and reporting agent. +- InfluxDB: High-performance time-series database. +- Grafana: Powerful visualization and dashboard creation. 
-## Operating FRINX Machine +### Key functions -To find out more about how to run the pre-packaged workflows, continue to [!ref icon="briefcase" text="Use cases"](../use-cases/index.md) +- Collects logs and metrics from services to provide platform observability. diff --git a/frinx-machine/index.yml b/frinx-machine/index.yml index d68bb36f7..f47cee9ee 100644 --- a/frinx-machine/index.yml +++ b/frinx-machine/index.yml @@ -1,3 +1,3 @@ Label: Frinx Machine -icon: gear +icon: browser order: 4000 \ No newline at end of file diff --git a/frinx-machine/installation/readme.md b/frinx-machine/installation/basic/readme.md similarity index 90% rename from frinx-machine/installation/readme.md rename to frinx-machine/installation/basic/readme.md index 6502d0462..9ad948251 100644 --- a/frinx-machine/installation/readme.md +++ b/frinx-machine/installation/basic/readme.md @@ -1,10 +1,10 @@ --- -icon: rocket expanded: false -order: 2000 +order: 9999 +label: Helm Installation Guide --- -# FRINX Machine Helm Chart Installation Guide +# Helm Chart Installation Guide This guide provides step-by-step instructions for installing FRINX Machine on a Kubernetes cluster using Helm charts. 
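At a glance, the installation this guide walks through boils down to a few commands. The sketch below is illustrative only: the repository alias `frinx`, the namespace `frinx`, and the release name `frinx-machine` are assumptions to adjust for your environment.

```shell
# Illustrative sketch of the overall install flow; names are assumptions.
# Add the FRINX Helm chart repository (alias "frinx" is an assumption)
helm repo add frinx https://FRINXio.github.io/helm-charts
helm repo update

# Create the target namespace and install the FRINX Machine chart
kubectl create namespace frinx
helm install -n frinx frinx-machine frinx/frinx-machine
```

The individual steps below cover the prerequisites (such as the Docker registry secret) that must be in place before the final `helm install`.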
@@ -59,16 +59,8 @@ frinx-machine-operators-cloudnative-pg-d9566444c-85w8g 1/1 Running 0 ## Step 4: Create Docker Registry Secret -Create a Docker registry secret for pulling images: +Please complete this step before continuing: [!ref icon="briefcase" text="Create Docker registry secret"](../docker-registry-secret/readme.md) -For more info about accessing private images, visit [Download Frinx Uniconfig](https://docs.frinx.io/frinx-uniconfig/getting-started/#download-frinx-uniconfig) - -```bash -kubectl create secret -n frinx docker-registry regcred \ - --docker-server="https://index.docker.io/v1/" \ - --docker-username="" \ - --docker-password="" -``` ## Step 5: Install FRINX Machine diff --git a/frinx-machine/installation/custom-worker-deployment/readme.md b/frinx-machine/installation/custom-worker-deployment/readme.md new file mode 100644 index 000000000..76bb3a9e7 --- /dev/null +++ b/frinx-machine/installation/custom-worker-deployment/readme.md @@ -0,0 +1,90 @@ +--- +expanded: false +order: 1995 +label: Custom worker deployment +--- + +# Custom worker deployment + +To deploy a custom worker to Frinx Machine, utilize the generic worker Helm chart. +Select the Helm chart version that corresponds with your application version, ensuring it is aligned with the Frinx Machine version. + +You can find the necessary Helm chart at: [Helm Chart for Frinx Workers](https://artifacthub.io/packages/helm/frinx-helm-charts/worker). 
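To pick a worker chart version aligned with your Frinx Machine release, the published versions can be listed straight from the chart repository. The repository alias `frinx` below is an assumption; the repository URL is the one referenced by the chart dependencies in this guide.

```shell
# Register the repository that hosts the generic worker chart
helm repo add frinx https://FRINXio.github.io/helm-charts
helm repo update

# List published versions of the worker chart to choose a matching one
helm search repo frinx/worker --versions
```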
+ +### Create the Chart Configuration + +Create the Chart.yaml file in a new folder: + +```yaml +# Chart.yaml +apiVersion: v2 +name: custom-worker +description: Kubernetes deployment of custom worker +type: application +version: 0.1.0 +maintainers: + - name: FRINX +dependencies: + - condition: custom-worker.enabled + name: worker + alias: custom-worker + repository: https://FRINXio.github.io/helm-charts + version: 4.0.0 + - condition: another-worker.enabled + name: worker + alias: another-worker + repository: https://FRINXio.github.io/helm-charts + version: 4.0.0 +``` + +### Customize the Values + +Create the values.yaml file next to Chart.yaml: + +```yaml +# values.yaml +x-frinx-rbac-admin-role: &frinx-rbac-admin-role "FRINXio" + +# -- use dependency alias from Chart.yaml +custom-worker: + enabled: true + + # -- override deployment name + fullnameOverride: "custom-workers" + + image: + # -- use your image name + repository: your/custom-worker + # -- use your image tag + tag: "tag" + + env: + # -- set worker RBAC based on Frinx Machine RBAC settings + # -- yaml anchor can be used to set same value for multiple workers + X_AUTH_USER_GROUP: *frinx-rbac-admin-role + +another-worker: + enabled: true + + fullnameOverride: "another-workers" + + image: + repository: your/another-worker + tag: "tag" + + env: + X_AUTH_USER_GROUP: *frinx-rbac-admin-role + +``` + +### Deploy charts + +``` +helm dependency build +helm upgrade --install -n frinx custom-worker . 
-f values.yaml +``` + +### Useful Links +[!ref minikube image load](https://minikube.sigs.k8s.io/docs/commands/image/) + +[!ref kind image load](https://kind.sigs.k8s.io/docs/user/quick-start/#loading-an-image-into-your-cluster) \ No newline at end of file diff --git a/frinx-machine/installation/customization/create-kind-cluster/readme.md b/frinx-machine/installation/customization/create-kind-cluster/readme.md new file mode 100644 index 000000000..b66092e8c --- /dev/null +++ b/frinx-machine/installation/customization/create-kind-cluster/readme.md @@ -0,0 +1,170 @@ +--- +expanded: false +order: 9999 +label: Cluster installation +--- + +# Setting Up a Kind Cluster with Cilium and NGINX Ingress Controller + +This guide will walk you through the process of deploying a Kubernetes (K8s) cluster using Kind (Kubernetes IN Docker), setting up the Cilium CNI (Container Network Interface), and deploying the NGINX Ingress Controller. + +## Prerequisites + +- `Kind`: Make sure that Kind is installed on your local machine. Follow the Kind installation guide if necessary. + +- `Helm`: Make sure that Helm is installed. Follow the Helm installation guide if necessary. +- `Cilium`: Make sure that the Cilium system requirements are fulfilled. Follow the [Cilium installation](https://docs.cilium.io/en/stable/operations/system_requirements/#admin-system-reqs) guide if necessary. 
+ + +## Deploy Kind cluster + +Create a Kind configuration file named kind-config.yaml with the following content: + +```yaml +# kind-config.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + kubeadmConfigPatches: + - | + kind: InitConfiguration + nodeRegistration: + kubeletExtraArgs: + node-labels: "ingress-ready=true" + extraPortMappings: + - containerPort: 80 + hostPort: 80 + listenAddress: 127.0.0.1 + protocol: TCP + - containerPort: 443 + hostPort: 443 + listenAddress: 127.0.0.1 + protocol: TCP +- role: worker +- role: worker +- role: worker +networking: + disableDefaultCNI: true + kubeProxyMode: none +``` + +This configuration sets up a Kind cluster with one control-plane node and three worker nodes. It also maps ports 80 and 443 from the host to the control-plane node, making the cluster ready for ingress traffic. + +Deploy the cluster using Kind: + +```bash +kind create cluster --config kind-config.yaml +``` + +Verify the cluster is running: + +```bash +kubectl cluster-info +``` + +You should see output indicating that the Kubernetes control plane and CoreDNS are running. + +```bash +Kubernetes control plane is running at https://127.0.0.1:43899 +CoreDNS is running at https://127.0.0.1:43899/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy + +To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. 
+``` + +## Deploy Cilium + +Create a Cilium configuration file named cilium-helm-values.yaml with the following content: + +```yaml +# cilium-helm-values.yaml +kubeProxyReplacement: strict +k8sServiceHost: kind-control-plane +k8sServicePort: 6443 +hostServices: + enabled: false +externalIPs: + enabled: true +nodePort: + enabled: true +hostPort: + enabled: true +image: + pullPolicy: IfNotPresent +ipam: + mode: kubernetes +hubble: + enabled: true + relay: + enabled: true + ui: + enabled: true + ingress: + enabled: true + annotations: + kubernetes.io/ingress.class: nginx + hosts: + - hubble-ui.127.0.0.1.nip.io +``` + +This configuration enables Cilium with kube-proxy replacement and various service options, including Hubble for network observability. + +Install Cilium using Helm: + +```bash +# Don't forget to use correct cluster context +# Add the Cilium Helm repository +helm repo add cilium https://helm.cilium.io/ + +# Deploy Cilium with the specified values +helm upgrade --install --namespace kube-system --repo https://helm.cilium.io cilium cilium --values cilium-helm-values.yaml +``` + +Check the status of the Cilium pods to ensure they are running: + +```bash +kubectl get pods -n kube-system +``` + +You should see the Cilium and Hubble components running without issues. 
+ +```bash +NAME READY STATUS RESTARTS AGE +cilium-2ldns 1/1 Running 0 30h +cilium-b877s 1/1 Running 0 30h +cilium-mhs9c 1/1 Running 0 30h +cilium-operator-7fc58985c4-m2kbv 1/1 Running 0 30h +cilium-operator-7fc58985c4-mq5pc 1/1 Running 0 30h +cilium-sqrdv 1/1 Running 0 30h +coredns-7db6d8ff4d-ltcjq 1/1 Running 0 30h +coredns-7db6d8ff4d-s6c6f 1/1 Running 0 30h +etcd-kind-control-plane 1/1 Running 0 30h +hubble-relay-6d88849768-2wcjn 1/1 Running 0 30h +hubble-ui-59bb4cb67b-g79pz 2/2 Running 0 30h +kube-apiserver-kind-control-plane 1/1 Running 0 30h +kube-controller-manager-kind-control-plane 1/1 Running 0 30h +kube-scheduler-kind-control-plane 1/1 Running 0 30h +``` + +## Deploy NGINX Ingress Controller + +Deploy the NGINX Ingress Controller using the following command: + +```bash +# Replace with the latest version from the official repository +kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.1/deploy/static/provider/kind/deploy.yaml +``` + +Verify the NGINX Ingress Controller is running: + +```bash +kubectl get pods -n ingress-nginx +``` + +You should see the NGINX Ingress Controller pod running. + +```bash +NAME READY STATUS RESTARTS AGE +ingress-nginx-admission-create-2p9jm 0/1 Completed 0 30h +ingress-nginx-admission-patch-tmnrp 0/1 Completed 0 30h +ingress-nginx-controller-d45d995d4-lqgr6 1/1 Running 0 30h +``` diff --git a/frinx-machine/installation/customization/frinx-machine-customization/readme.md b/frinx-machine/installation/customization/frinx-machine-customization/readme.md new file mode 100644 index 000000000..50ec34633 --- /dev/null +++ b/frinx-machine/installation/customization/frinx-machine-customization/readme.md @@ -0,0 +1,181 @@ +--- +expanded: false +order: 9999 +--- + +# Helm Chart Installation Guide with customization + +This guide provides the step-by-step instructions for installing FRINX Machine on a Kubernetes cluster using Helm charts with customization via helm dependency values. 
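Customization throughout this guide relies on Helm's dependency value overriding: any values placed under a dependency's name (or alias) from Chart.yaml in the parent chart's values.yaml are passed down to that subchart, nesting one level per chart. A minimal sketch of the shape (the keys shown here are illustrative):

```yaml
# values.yaml of the parent chart
frinx-machine:        # must match the dependency name (or alias) in Chart.yaml
  krakend:            # subchart of frinx-machine; its values nest one level deeper
    ingress:
      enabled: true
```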
+ +## Prerequisites + +- `Cluster`: Make sure that your cluster is running. +- `Helm`: Make sure that Helm is installed. Follow the Helm installation guide if necessary. + + +## Step 1: Add the FRINX Helm Repository + +Add the FRINX Helm repository and update the repository list: + +```bash +helm repo add frinx https://FRINXio.github.io/helm-charts +helm repo update +``` + +## Step 2: Install Operators and CRDs + +Install FRINX Machine operators and custom resource definitions (CRDs): + +```bash +helm install -n frinx --create-namespace frinx-machine-operators frinx/frinx-machine-operators +``` + +Verify the installation by checking the pods in the frinx namespace: + +```bash +kubectl get pods -n frinx +``` + +You should see output similar to: + +```bash +NAME READY STATUS RESTARTS AGE +arango-frinx-machine-operators-operator-6dfdff75bd-cnwmp 1/1 Running 0 25s +arango-frinx-machine-operators-operator-6dfdff75bd-k8kqp 1/1 Running 0 25s +frinx-machine-operators-cloudnative-pg-d9566444c-85w8g 1/1 Running 0 25s +``` + +## Step 3: Create Docker Registry Secret + +Please complete this step before continuing: [!ref icon="briefcase" text="Create Docker registry secret"](/frinx-machine/installation/docker-registry-secret/readme.md) + +## Step 4: Customize FRINX Machine + +To customize the deployment of FRINX Machine, you need to create a folder that will contain the necessary Helm chart files: Chart.yaml and values.yaml. This folder will serve as the root for your customized Helm chart, with FRINX Machine as a dependency. + +### Create the Chart Configuration + +1. Create the Chart.yaml file: + +This file contains the metadata for your Helm chart and specifies FRINX Machine as a dependency. 
+ +```yaml +# Chart.yaml +apiVersion: v2 +name: frinx-machine +description: Kubernetes deployment of FRINX-machine +icon: https://avatars.githubusercontent.com/u/23452093?s=200&v=4 +type: application +version: 6.1.0 +maintainers: + - name: FRINX +dependencies: + - name: frinx-machine + repository: https://FRINXio.github.io/helm-charts + version: 9.0.0 +``` + +This configuration sets up the basic information about the Helm chart and defines FRINX Machine as a dependency, pulling it from the specified repository. + +2. Check Dependency Chart Details: + +For mapping the Helm chart release to the product release and more details on the FRINX Machine Helm chart, refer to the [FRINX Machine Helm chart documentation on Artifact Hub](https://artifacthub.io/packages/helm/frinx-helm-charts/frinx-machine). + +### Customize the Values + +3. Create the values.yaml file: + + This file contains custom configurations for the FRINX Machine subcharts. Customization options are based on the documentation of each subchart, which can be found on Artifact Hub. + +```yaml +# values.yaml +frinx-machine: + krakend: + ingress: + enabled: true + className: nginx + annotations: + nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600" + nginx.ingress.kubernetes.io/proxy-read-timeout: "3600" + nginx.ingress.kubernetes.io/proxy-send-timeout: "3600" + hosts: + - host: krakend.127.0.0.1.nip.io + paths: + - path: "/" + pathType: ImplementationSpecific + + uniconfig: + image: + repository: "frinxio/uniconfig" + + performance-monitor: + image: + repository: "frinxio/performance-monitor" +``` + + - krakend: Configures the Ingress settings for the KrakenD API Gateway. It enables the Ingress, sets the class to nginx, and includes custom annotations for proxy timeouts. The host is set to krakend.127.0.0.1.nip.io with the path /. + + - uniconfig: Specifies the Docker image repository for the Uniconfig component. 
+ + - performance-monitor: Specifies the Docker image repository for the Performance Monitor component. + +### Customize Subcharts + +4. Subchart Customization: + + For more detailed customization possibilities, refer to the [documentation for each dependency Helm chart version](https://artifacthub.io/packages/search?org=frinx&sort=relevance&page=1). This will provide information on all configurable parameters and how to adjust them to suit your deployment needs. + + +By following these steps, you can effectively customize the deployment of FRINX Machine using Helm. Ensure you refer to the individual subchart documentation on Artifact Hub for specific configuration options and further customization. + + +## Step 5: Install FRINX Machine + +Install the FRINX Machine using Helm: + +```bash +helm dependency update +helm install -n frinx frinx-machine . -f values.yaml +``` + +Verify the installation by checking the pods in the frinx namespace: + +```bash +kubectl get pods -n frinx +``` + +You should see output similar to: + +```bash +NAME READY STATUS RESTARTS AGE +arango-frinx-machine-operators-operator-6dfdff75bd-h6mxb 1/1 Running 0 21m +arango-frinx-machine-operators-operator-6dfdff75bd-xh9x6 1/1 Running 0 21m +arangodb-sngl-yxxouifa-e0f232 1/1 Running 0 11m +conductor-server-6757754659-tss78 2/2 Running 0 19m +device-induction-56fdd555b8-j646n 1/1 Running 0 19m +frinx-frontend-7c596b6bfc-qgthp 2/2 Running 0 19m +frinx-machine-operators-cloudnative-pg-d9566444c-fgp5w 1/1 Running 0 21m +grafana-64986657b8-zzc5x 1/1 Running 0 19m +influxdb-0 1/1 Running 0 19m +inventory-57994dcd85-9v2f9 1/1 Running 0 19m +kafka-controller-0 1/1 Running 0 19m +krakend-85bb6cd88b-6ldg7 2/2 Running 0 19m +loki-0 1/1 Running 0 19m +performance-monitor-f6885b4dc-4wrfp 1/1 Running 0 19m +postgresql-1 1/1 Running 0 11m +postgresql-2 1/1 Running 0 11m +promtail-zfmkn 1/1 Running 0 19m +resource-manager-d98d6866b-w5d6x 1/1 Running 0 19m +swagger-ui-5b9fc85b99-8tzdd 1/1 Running 0 19m 
+telegraf-ds-drsh7 1/1 Running 0 19m +timescale-db-0 1/1 Running 0 19m +topology-discovery-6d8c975876-gqg79 2/2 Running 0 19m +uc-zone-lb-9cd56dd7-x82tz 1/1 Running 0 19m +uniconfig-controller-75d945f9c5-lggdb 1/1 Running 0 12m +uniconfig-postgresql-1 1/1 Running 0 12m +uniconfig-postgresql-2 1/1 Running 0 12m +``` + +## Step 6: Access the UI + +Visit the FRINX Machine page in your browser at `https://krakend.127.0.0.1.nip.io/frinxui` diff --git a/frinx-machine/installation/customization/index.yml b/frinx-machine/installation/customization/index.yml new file mode 100644 index 000000000..ff9dd1bc8 --- /dev/null +++ b/frinx-machine/installation/customization/index.yml @@ -0,0 +1,2 @@ +Label: Advanced Helm Chart Installation +order: 7777 \ No newline at end of file diff --git a/frinx-machine/installation/docker-registry-secret/readme.md b/frinx-machine/installation/docker-registry-secret/readme.md new file mode 100644 index 000000000..ced88adfa --- /dev/null +++ b/frinx-machine/installation/docker-registry-secret/readme.md @@ -0,0 +1,20 @@ +--- +expanded: false +order: 9999 +label: Docker Registry Secret +--- + +# Create Docker Registry Secret + +Create a Kubernetes Docker registry secret for pulling images from a private registry: + +```bash +# PLACEHOLDERS must be replaced with user credentials +kubectl create secret -n frinx docker-registry regcred \ + --docker-server="https://index.docker.io/v1/" \ + --docker-username="" \ + --docker-password="" +``` + +For more info about accessing private images, visit [Download Frinx Uniconfig](https://docs.frinx.io/frinx-uniconfig/getting-started/#download-frinx-uniconfig) + diff --git a/frinx-machine/installation/index.yml b/frinx-machine/installation/index.yml new file mode 100644 index 000000000..ab4b91a14 --- /dev/null +++ b/frinx-machine/installation/index.yml @@ -0,0 +1,4 @@ +icon: gear +expanded: false +order: 8888 +label: Kubernetes installation \ No newline at end of file diff --git 
a/frinx-machine/installation/oauth2-proxy/readme.md b/frinx-machine/installation/oauth2-proxy/readme.md new file mode 100644 index 000000000..a58197444 --- /dev/null +++ b/frinx-machine/installation/oauth2-proxy/readme.md @@ -0,0 +1,165 @@ +--- +expanded: false +order: 1992 +--- + +# Authorization and authentication + +Follow the official Helm chart repository for [oauth2-proxy](https://artifacthub.io/packages/helm/oauth2-proxy/oauth2-proxy). +Don't forget to update the version to a more recent one. + +```yaml +# Chart.yaml +apiVersion: v2 +name: azure-oauth2-proxy +description: Kubernetes deployment of azure-oauth2-proxy +type: application +version: 6.1.0 +maintainers: + - name: FRINX +dependencies: + - name: oauth2-proxy + repository: https://oauth2-proxy.github.io/manifests + version: 7.7.4 + condition: oauth2-proxy.enabled +``` + +```yaml +# templates/oauth2-proxy-secret.yaml +apiVersion: v1 +kind: Secret +type: Opaque +metadata: + name: oauth2-proxy + namespace: frinx +data: + client-id: + client-secret: + cookie-secret: +``` + ++++ Azure AD + +Follow the [oauth2-proxy official documentation](https://oauth2-proxy.github.io/oauth2-proxy/configuration/providers/azure) to configure Azure AD. 
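The `data` values in the Secret above must be base64-encoded, and oauth2-proxy requires the cookie secret seed to be 16, 24, or 32 bytes long. A sketch for generating and encoding one (this pipeline is one common approach, not the only one):

```bash
# Generate a 32-byte, URL-safe cookie secret for oauth2-proxy
COOKIE_SECRET=$(openssl rand -base64 32 | tr -- '+/' '-_' | head -c 32)

# Kubernetes Secret "data" fields hold base64-encoded values; encode each
# value before pasting it into templates/oauth2-proxy-secret.yaml
printf '%s' "$COOKIE_SECRET" | base64
```

The client-id and client-secret values from your Azure AD app registration are encoded the same way.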
+ +```yaml +# templates/azure-redis-secret.yaml +apiVersion: v1 +kind: Secret +metadata: + name: {{ .Chart.Name }}-redis-secret +type: Opaque +data: + redis-password: {{ .Values.redisPassword.password | b64enc }} +``` + +```yaml +# values.yaml + +x-frinx-image-pull-secret: &frinx-image-pull-secret regcred + +oauth2-proxy: + enabled: true + + fullnameOverride: "oauth2-proxy" + + image: + repository: "frinxio/oauth2-proxy" + tag: "6.1.0-alpine" + + imagePullSecrets: + - name: *frinx-image-pull-secret + + redis: + enabled: true + architecture: standalone + + ingress: + enabled: true + className: nginx + hosts: + - "fm.127.0.0.1.nip.io" + annotations: + nginx.ingress.kubernetes.io/force-ssl-redirect: "true" + nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600" + nginx.ingress.kubernetes.io/proxy-read-timeout: "3600" + nginx.ingress.kubernetes.io/proxy-send-timeout: "3600" + + sessionStorage: + type: redis + redis: + existingSecret: "azure-oauth2-proxy-redis-secret" + passwordKey: "redis-password" + + config: + existingSecret: oauth2-proxy + + configFile: |- + # DEFAULT CONFIGURATION + # https://oauth2-proxy.github.io/oauth2-proxy/configuration/overview + + custom_sign_in_logo = "/tmp/frinx/frinx.png" + upstreams = ["http://krakend:8080"] + + cookie_secure = true + cookie_expire = 0 + # cookie_httponly = true + + pass_authorization_header = false + proxy_websockets = true + + email_domains = [ "*" ] + + # DEPENDS ON DEPLOYMENT SETUP, INGRESS CONFIGURATION + cookie_domains = [ "fm.127.0.0.1.nip.io" ] + whitelist_domains = [ "fm.127.0.0.1.nip.io" ] + + provider = "azure" + azure_tenant = "YOUR_TENANT_ID" + oidc_issuer_url = "https://login.microsoftonline.com/YOUR_TENANT_ID/v2.0" + + login_url = "https://login.microsoftonline.com" + redirect_url = "https://fm.127.0.0.1.nip.io/oauth2/callback" + + ssl_insecure_skip_verify = false + pass_access_token = false + + skip_jwt_bearer_tokens = true + + extraArgs: + azure-graph-group-field: displayName + +redisPassword: + 
password: "yourPassword" + +``` + ++++ + + +## Install Oauth2-Proxy +```bash +helm dependency update +helm install -n frinx oauth2-proxy . -f values.yaml +``` + + +## Configure RBAC + +RBAC functionality can be configured at the subchart level: + +- https://artifacthub.io/packages/helm/frinx-helm-charts/krakend?modal=values&path=rbac +- https://artifacthub.io/packages/helm/frinx-helm-charts/workflow-manager?modal=values&path=rbac +- https://artifacthub.io/packages/helm/frinx-helm-charts/topology-discovery?modal=values&path=rbac + +For workers, set the admin role through the environment: + +```yaml +# values.yaml +x-frinx-rbac-admin-role: &frinx-rbac-admin-role "FRINXio" + +env: +  X_AUTH_USER_GROUP: *frinx-rbac-admin-role +``` \ No newline at end of file diff --git a/frinx-machine/monitoring/readme.md b/frinx-machine/monitoring/readme.md index cfca6eb49..f798102d5 100644 --- a/frinx-machine/monitoring/readme.md +++ b/frinx-machine/monitoring/readme.md @@ -1,7 +1,7 @@ --- label: Monitoring with Grafana icon: pulse -order: 1500 +order: 100 --- # Grafana @@ -24,7 +24,6 @@ Grafana in FRINX Machine monitors multitude of metrics. At this time, these are: - Device monitoring - FRINX Machine logs - Node monitoring -- Swarm monitoring - SSL monitoring - UniConfig-controller monitoring - Workflows monitoring @@ -47,20 +46,6 @@ It reports info like **CPU utilisation**, **Memory utilisation**, **Disk usage** ![Node Monitoring](nodemonitoring.png) -### FRINX Machine Swarm Monitoring - -This dashboard monitors metrics specifically tied to FM within the VM/System. -Metrics like **Up-time**, **Available/Utilised memory**, **Number of running/stopped containers**, **CPU usage per container**, **Memory usage per container**, I**ncoming/Outcoming network traffic**, etc. - -![Swarm Monitoring](swarmmonitoring.png) - - -### SSL Monitoring - -This dashboard displays data about your SSL certificates. It displays dates until your certificates are valid. 
- -![SSL Monitoring](sslmonitoring2.png) - ### UniConfig Controller Monitoring This dashboard keeps track of various UniConfig transactions. It displays number of transactions at a given time. diff --git a/frinx-machine/monitoring/sslmonitoring2.png b/frinx-machine/monitoring/sslmonitoring2.png deleted file mode 100644 index fea3596c6..000000000 Binary files a/frinx-machine/monitoring/sslmonitoring2.png and /dev/null differ diff --git a/frinx-machine/use-cases/add-to-inventory-and-install/index.yml b/frinx-machine/use-cases/add-to-inventory-and-install/index.yml new file mode 100644 index 000000000..b3bc32240 --- /dev/null +++ b/frinx-machine/use-cases/add-to-inventory-and-install/index.yml @@ -0,0 +1 @@ +order: 999 diff --git a/frinx-machine/use-cases/add-to-inventory-and-install/readme.md b/frinx-machine/use-cases/add-to-inventory-and-install/readme.md index 429df4f11..461ff5f7c 100644 --- a/frinx-machine/use-cases/add-to-inventory-and-install/readme.md +++ b/frinx-machine/use-cases/add-to-inventory-and-install/readme.md @@ -8,7 +8,7 @@ At the FRINX Machine **Dashboard** under **Device Inventory** section click on * ## JSON examples -New devices added to Device inventory are defined by JSON code snippets. (These snippets are part of UniConfig RPC connection-manager:install-node.) They are similar to [Blueprints](/frinx-workflow-manager/blueprints). This snippet is going to be filled into **Mount parameters** field. +New devices added to Device inventory are defined by JSON code snippets. (These snippets are part of UniConfig RPC connection-manager:install-node.) They are similar to [Blueprints](/frinx-device-inventory/blueprints). This snippet is going to be filled into **Mount parameters** field. Another way is to add a new device from blueprint: toggle the **Use blueprint?** switch in the form and choose the blueprint that you want to use from **Select blueprint** drop-down list. 
diff --git a/frinx-machine/use-cases/create-loopback-all-in-uniconfig/index.yml b/frinx-machine/use-cases/create-loopback-all-in-uniconfig/index.yml new file mode 100644 index 000000000..fc19f0352 --- /dev/null +++ b/frinx-machine/use-cases/create-loopback-all-in-uniconfig/index.yml @@ -0,0 +1,2 @@ +label: Create loopback all in uniconfig +order: 996 diff --git a/frinx-machine/use-cases/04Create_loopback_all_in_uniconfig/readme.md b/frinx-machine/use-cases/create-loopback-all-in-uniconfig/readme.md similarity index 100% rename from frinx-machine/use-cases/04Create_loopback_all_in_uniconfig/readme.md rename to frinx-machine/use-cases/create-loopback-all-in-uniconfig/readme.md diff --git a/frinx-machine/use-cases/create-loopbacks/Loop-Create.png b/frinx-machine/use-cases/create-loopbacks/Loop-Create.png deleted file mode 100644 index c9da52aeb..000000000 Binary files a/frinx-machine/use-cases/create-loopbacks/Loop-Create.png and /dev/null differ diff --git a/frinx-machine/use-cases/create-loopbacks/Loop-DynamicFork.png b/frinx-machine/use-cases/create-loopbacks/Loop-DynamicFork.png deleted file mode 100644 index e92563bc7..000000000 Binary files a/frinx-machine/use-cases/create-loopbacks/Loop-DynamicFork.png and /dev/null differ diff --git a/frinx-machine/use-cases/create-loopbacks/Loop-Output.png b/frinx-machine/use-cases/create-loopbacks/Loop-Output.png deleted file mode 100644 index 7d9a66ee4..000000000 Binary files a/frinx-machine/use-cases/create-loopbacks/Loop-Output.png and /dev/null differ diff --git a/frinx-machine/use-cases/create-loopbacks/readme.rst b/frinx-machine/use-cases/create-loopbacks/readme.rst deleted file mode 100644 index 6e4edd2ee..000000000 --- a/frinx-machine/use-cases/create-loopbacks/readme.rst +++ /dev/null @@ -1,58 +0,0 @@ -# Create loopback interfaces - -This use-case discusses workflow **Create_loopback_all_in_uniconfig**. This workflow creates a loopback interface on all devices that are installed in the inventory. 
- -!!!warning -Make sure you didn't skip -installing all devices from inventory otherwise this workflow might not work correctly. -[!ref text="Install all devices from inventory"](../install-all-devices-from-inventory/) -!!! - -## Create loopback address on devices stored in the inventory - -!!!danger -By default, this workflow doesn't work on Junos, **IOSXR653**, **IOSXR663** and **SAOS8** devices. This is because we are sending the data to the "frinx-openconfig-interfaces" model, which either isn't present on those devices or its structure differs from other devices we write to. If you want to write data to those devices, you will have to edit the **Create_loopback_all_in_uniconfig** workflow. In this case, we successfully write to **IOS**, **XR**, **VRP**, **SAOS6**, **Leaf**, **Spine**. -!!! - -In the next step, we execute a workflow that creates loopback on -every installed device in UniConfig. - -In the `Workflow Manager` section, click on the `Explore` button. A list of all workflows will appear. Search for a workflow called **Create_loopback_all_in_uniconfig**. - -As for the input, the only thing you need to input is "loopback_id", the name of the loopback interface e.g., **77**. -After executing, click on the numeric link that appears to see workflow progression and results. - -![Executed workflows](Loop-Create.png) - -On the results page you will see 5 individual tasks: - -### INVENTORY_get_all_devices_as_dynamic_fork_tasks - -This workflow displays the list of all nodes in the inventory. It parses the output in the correct format for the dynamic fork, which creates a dynamic amount of tasks, depending on the number of devices in the inventory. It allows us to use one task, disregarding the state of the inventory. - -### SUB_WORKFLOW - -This is the dynamic fork sub-workflow. In this case, it creates **UNICONFIG_write_structured_device_data** for every individual device in the inventory. 
Thanks to this, you get detailed information on the progress and succession of every device. - -### UNICONFIG_calculate_diff - -This RPC creates a diff between the actual UniConfig topology nodes and the intended UniConfig topology nodes. Find full details about calculate diff [here](/frinx-uniconfig/user-guide/uniconfig-operations/uniconfig-node-manager/rpc_calculate-diff/#rpc-calculate-diff) - -### UNICONFIG_dryrun_commit - -The RPC will resolve the diff between the actual and intended configuration of nodes by using UniConfig Node Manager. Changes for CLI nodes are applied by using cli-dryrun mountpoint which only stores translated CLI commands to the cli-dry-run journal. After all, changes are applied, the cli-dryrun journal is read and an RPC output is created and returned. Find full details about dryrun commit [here](/frinx-uniconfig/user-guide/uniconfig-operations/dryrun-manager/#rpc-dryrun-commit) - - -### UNICONFIG_commit - -This is the final task that actually commits the intended configuration to the devices. Find full details about commit [here](/frinx-uniconfig/user-guide/uniconfig-operations/uniconfig-node-manager/rpc_commit/#rpc-commit) - - -![Workflows details](Loop-Output.png) - -![Dynamic Fork details](Loop-DynamicFork.png) - -If the workflow has been completed successfully, a loopback interface is now present on all devices in the inventory. - -The execution of all workflows can be manual, via the UI, or can be -automated and scheduled via the REST API of conductor server. 
\ No newline at end of file diff --git a/frinx-machine/use-cases/create-workflow/index.yml b/frinx-machine/use-cases/create-workflow/index.yml new file mode 100644 index 000000000..d0bb3695c --- /dev/null +++ b/frinx-machine/use-cases/create-workflow/index.yml @@ -0,0 +1,2 @@ +label: Create workflow +order: 990 diff --git a/frinx-machine/use-cases/03create-workflow/readme.md b/frinx-machine/use-cases/create-workflow/readme.md similarity index 100% rename from frinx-machine/use-cases/03create-workflow/readme.md rename to frinx-machine/use-cases/create-workflow/readme.md diff --git a/frinx-machine/use-cases/dashboard/index.yml b/frinx-machine/use-cases/dashboard/index.yml new file mode 100644 index 000000000..a419acf0f --- /dev/null +++ b/frinx-machine/use-cases/dashboard/index.yml @@ -0,0 +1,2 @@ +label: Dashboard +order: 1000 diff --git a/frinx-machine/use-cases/01dashboard/readme.md b/frinx-machine/use-cases/dashboard/readme.md similarity index 64% rename from frinx-machine/use-cases/01dashboard/readme.md rename to frinx-machine/use-cases/dashboard/readme.md index d433dddc8..3a4f7c954 100644 --- a/frinx-machine/use-cases/01dashboard/readme.md +++ b/frinx-machine/use-cases/dashboard/readme.md @@ -2,4 +2,4 @@ After logging into FRINX Machine, you can see the **FRINX Machine dashboard**: -![FRINX Machine dashboard](../demo_pics/fm2.0_dashboard.png) +![FRINX Machine dashboard](../demo_pics/fm_dashboard.png) diff --git a/frinx-machine/use-cases/demo_pics/fm_dashboard.png b/frinx-machine/use-cases/demo_pics/fm_dashboard.png index 6592c92e9..4553002f7 100644 Binary files a/frinx-machine/use-cases/demo_pics/fm_dashboard.png and b/frinx-machine/use-cases/demo_pics/fm_dashboard.png differ diff --git a/frinx-machine/use-cases/demo_pics/fm_dashboard_original.png b/frinx-machine/use-cases/demo_pics/fm_dashboard_original.png deleted file mode 100644 index 044597f4b..000000000 Binary files a/frinx-machine/use-cases/demo_pics/fm_dashboard_original.png and /dev/null differ diff 
--git a/frinx-machine/use-cases/install-all-devices-from-inventory/install_all_from_inventory_execute.png b/frinx-machine/use-cases/demo_pics/install_all_from_inventory_execute.png similarity index 100% rename from frinx-machine/use-cases/install-all-devices-from-inventory/install_all_from_inventory_execute.png rename to frinx-machine/use-cases/demo_pics/install_all_from_inventory_execute.png diff --git a/frinx-machine/use-cases/install-all-devices-from-inventory/install_all_from_inventory_pop_up_window.png b/frinx-machine/use-cases/demo_pics/install_all_from_inventory_pop_up_window.png similarity index 100% rename from frinx-machine/use-cases/install-all-devices-from-inventory/install_all_from_inventory_pop_up_window.png rename to frinx-machine/use-cases/demo_pics/install_all_from_inventory_pop_up_window.png diff --git a/frinx-machine/use-cases/install-all-devices-from-inventory/install_all_from_inventory_result.png b/frinx-machine/use-cases/demo_pics/install_all_from_inventory_result.png similarity index 100% rename from frinx-machine/use-cases/install-all-devices-from-inventory/install_all_from_inventory_result.png rename to frinx-machine/use-cases/demo_pics/install_all_from_inventory_result.png diff --git a/frinx-machine/use-cases/install-all-devices-from-inventory/install_all_from_inventory_search.png b/frinx-machine/use-cases/demo_pics/install_all_from_inventory_search.png similarity index 100% rename from frinx-machine/use-cases/install-all-devices-from-inventory/install_all_from_inventory_search.png rename to frinx-machine/use-cases/demo_pics/install_all_from_inventory_search.png diff --git a/frinx-machine/use-cases/device-configure-loopback/index.yml b/frinx-machine/use-cases/device-configure-loopback/index.yml new file mode 100644 index 000000000..5557b4cdf --- /dev/null +++ b/frinx-machine/use-cases/device-configure-loopback/index.yml @@ -0,0 +1,2 @@ +label: Device configure loopback +order: 998 diff --git 
a/frinx-machine/use-cases/02device-configure-loopback/readme.md b/frinx-machine/use-cases/device-configure-loopback/readme.md similarity index 100% rename from frinx-machine/use-cases/02device-configure-loopback/readme.md rename to frinx-machine/use-cases/device-configure-loopback/readme.md diff --git a/frinx-machine/use-cases/fm-dashboard.png b/frinx-machine/use-cases/fm-dashboard.png deleted file mode 100644 index e43758129..000000000 Binary files a/frinx-machine/use-cases/fm-dashboard.png and /dev/null differ diff --git a/frinx-machine/use-cases/fm2.0_dashboard.png b/frinx-machine/use-cases/fm2.0_dashboard.png deleted file mode 100644 index 4553002f7..000000000 Binary files a/frinx-machine/use-cases/fm2.0_dashboard.png and /dev/null differ diff --git a/frinx-machine/use-cases/index.md b/frinx-machine/use-cases/index.md index 3a7e77037..ef3d8f86a 100644 --- a/frinx-machine/use-cases/index.md +++ b/frinx-machine/use-cases/index.md @@ -1,58 +1,60 @@ --- label: Demo Use Cases icon: briefcase -order: 1000 +order: 100 --- # Demo Use Cases -There are several ways of installing device/devices in FRINX Machine. You -can run a pre-packaged workflow to install a network device. You can -add devices to Device inventory and install devices from there - you can install a single device -or you can install several selected devices simultaneously. +The following use cases demonstrate the basic usage of FRINX Machine. +These examples will help you explore the platform's capabilities, including executing prepared workflows, creating custom workflows via the workflow builder, and manually managing your device inventory. -To start installing devices open up FRINX Machine UI. +#### 1. Executing Prepared Workflows +- Experience the power of automation by running pre-configured workflows. +- Learn how to efficiently manage network tasks using automated processes. -## Open FRINX Machine UI +#### 2. 
Creating Custom Workflows +- Utilize the workflow builder to create and customize your workflows. +- Tailor processes to your specific needs, enhancing operational efficiency. -Note: you can use our demo at https://demo.frinx.io +#### 3. Managing Device Inventory +- Add devices to your inventory manually, providing flexibility to experiment with different configurations. +- This feature is particularly useful for testing and deploying your own networking devices. -Open your browser and go to `[host_ip]` if installed locally go to -`https://localhost`. This is the GUI (UniConfig UI) for managing all of -your devices. You should see a screen like this: +#### 4. Controlling Devices +- Manage devices in your inventory through the user interface (UI). +- Execute operations using prepared workflows, ensuring streamlined control and management. -[![FM 2.0 Dashboard](fm2.0_dashboard.png)](fm2.0_dashboard.png) +#### 5. Configuring Network Services +- Set up L2VPNs between virtual devices, enhancing network connectivity and performance. +- Create loopbacks on device interfaces either manually or through automated workflows, demonstrating the versatility of FRINX Machine. -!!! -For Demo Use Cases, please download repository [fm-workflows](https://github.com/FRINXio/fm-workflows) +These use cases serve as a starting point for exploring the extensive capabilities of FRINX Machine. +They illustrate how the platform can be utilized for a variety of networking tasks, from basic device management to complex network configurations. -Make sure FRINX-machine is running, navigate to -``` - cd fm-workflows -``` +## Install Demo Use Case workflows -and execute +Use the [demo-workflows helm chart](https://artifacthub.io/packages/helm/frinx-helm-charts/demo-workflows) to run the sample-topology and the frinx-demo-workflows conductor worker. -``` - ./startup.sh -``` +### Install FRINX Machine +Make sure your FRINX Machine is running.
+[!ref](../installation/basic/readme.md) -Imported workflows and tasks will appear in FRINX-Machine UI, -immediately after the import finishes. -!!! +### Install Demo workflows and Sample Topology -In the following articles, you will learn how to install a device to -UniConfig and how to install all devices from Device inventory to UniConfig. -Device inventory is automatically filled with sample devices for you when you start FRINX Machine with [fm-workflows](https://github.com/FRINXio/fm-workflows). -Later we will learn how to create a loopback address on the devices -that we previously stored in Device inventory and how to read the journals -of these devices. +```bash +helm repo add frinx https://FRINXio.github.io/helm-charts +helm repo update +``` -Then we will take a look at how to obtain data from the -devices that you have in the network. +```bash +helm install -n frinx frinx-demo-workflows frinx/demo-workflows +``` -Lastly we will take a look at how you can add devices to your inventory -manually. This might be useful if you want to play around with the -FRINX Machine a bit and try to install your own networking devices. 
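+Once the chart is installed, you can verify that the workers came up — a sketch assuming cluster access via `kubectl` and the `frinx` namespace used by the `-n` flag above:

```shell
# List the pods created by the demo-workflows chart
# (namespace assumed to match the helm install command above)
kubectl get pods -n frinx
```

+A healthy deployment shows the sample-topology and frinx-demo-workflows pods with STATUS `Running`, as in the listing below.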
+```bash +NAME READY STATUS RESTARTS AGE +sample-topology-5d677db769-zxbzs 1/1 Running 0 8h +frinx-demo-workflows-6f8c7666b8-nzmd9 1/1 Running 0 8h +``` diff --git a/frinx-machine/use-cases/install-all-devices-from-inventory/index.yml b/frinx-machine/use-cases/install-all-devices-from-inventory/index.yml new file mode 100644 index 000000000..9c2b64a26 --- /dev/null +++ b/frinx-machine/use-cases/install-all-devices-from-inventory/index.yml @@ -0,0 +1,2 @@ +label: Install all devices from inventory +order: 999 diff --git a/frinx-machine/use-cases/install-all-devices-from-inventory/readme.md b/frinx-machine/use-cases/install-all-devices-from-inventory/readme.md index fa4395542..0889b77ae 100644 --- a/frinx-machine/use-cases/install-all-devices-from-inventory/readme.md +++ b/frinx-machine/use-cases/install-all-devices-from-inventory/readme.md @@ -8,11 +8,11 @@ Follow these instructions to use the workflow: At the FRINX Machine **Dashboard** under **Workflow Manage** section click on **Explore workflows** panel. The page titled **Workflow definitions** opens. Use **Search workflow by name** input box and fill in **Install_all_from_inventory** and click **Search** button. -![Search for workflow Install_all_from_inventory](install_all_from_inventory_search.png) +![Search for workflow Install_all_from_inventory](../demo_pics/install_all_from_inventory_search.png) The list of workflows narrows down to two items - workflows Install_all_from_inventory and Uninstall_all_from_inventory. Click blue `Execute` button (blue play icon) located on the row next to the workflow. The form titled with the name of workflow **Install_all_from_inventory** appears and optionally you can fill in the input parameter **labels** which allows to select a subset of devices to install. (You can specify a device label while adding devices to Device Inventory.) We want to install all uninstalled devices - do not fill in the input **labels** and click **Execute workflow** button. 
As a result to the left of the **Execute workflow** button will appear the link **Executed workflow in detail**. -![Execute workflow Install_all_from_inventory](install_all_from_inventory_pop_up_window.png) +![Execute workflow Install_all_from_inventory](../demo_pics/install_all_from_inventory_pop_up_window.png) After you click the link **Executed workflow in detail** you will be navigated to a page with details of the executed workflow - it displays individual tasks for this workflow, it is possible to click whatever task and examine its inputs and outputs, whether it was successful or unsuccessful etc. diff --git a/frinx-machine/use-cases/lacp/LACP-SubflowDynamicFork.png b/frinx-machine/use-cases/lacp/LACP-SubflowDynamicFork.png deleted file mode 100644 index 59882218a..000000000 Binary files a/frinx-machine/use-cases/lacp/LACP-SubflowDynamicFork.png and /dev/null differ diff --git a/frinx-machine/use-cases/lacp/LACP-WorkflowConf.png b/frinx-machine/use-cases/lacp/LACP-WorkflowConf.png deleted file mode 100644 index d4ae5c482..000000000 Binary files a/frinx-machine/use-cases/lacp/LACP-WorkflowConf.png and /dev/null differ diff --git a/frinx-machine/use-cases/lacp/lacp.rst b/frinx-machine/use-cases/lacp/lacp.rst deleted file mode 100644 index 9301961e2..000000000 --- a/frinx-machine/use-cases/lacp/lacp.rst +++ /dev/null @@ -1,62 +0,0 @@ - -LACP workflows -============== - -This workflow is using UniConfig to create LAG interface on two nodes and assigns the bundle id to the given interfaces on both nodes. - -**Supported device**: ios-xr mounted as a cli device - - -.. important:: - - Make sure you didn't skip :ref:`mounting all devices in inventory `, otherwise this workflow might not work correctly. - - -Creating a link aggregation between two nodes ----------------------------------------------- - -In the next step we will create a link between node 1 and node 2. - -Click on: :menuselection:`Home --> Workflows --> Definitions`. 
Then search for the workflow: **Link_aggregation**. Click on **Input**. - - -After providing input parameters, you can execute the workflow. - - -**Example of input parameters**: - -.. code-block:: text - - node1: XR01 - bundle_ether_id: 3 - bundle_ether_enabled: true - node2: XR02 - node1_ifaces: GigabitEthernet0/0/0/0, GigabitEthernet0/0/0/1 - node2_ifaces: GigabitEthernet0/0/0/1, GigabitEthernet0/0/0/2, GigabitEthernet0/0/0/3 - - - -.. image:: LACP-WorkflowConf.png - :target: /_images/LACP-WorkflowConf.png - :alt: LACP Config - - -Workflow execution -------------------- - -After workflow execution, click the ID of the workflow and click **Execution Flow,** you will be able to follow the progress of the execution of the workflow. - - -.. image:: progress.png - :target: /_images/progress.png - :alt: Wf diagram - - -The workflow diagram in progress will color the steps according to your progress. - - -.. image:: LACP-SubflowDynamicFork.png - :target: /_images/LACP-SubflowDynamicFork.png - :alt: Completed Dynamic Fork - -The diagram displayed above shows that the workflow has been successfully completed. 
diff --git a/frinx-machine/use-cases/lacp/progress.png b/frinx-machine/use-cases/lacp/progress.png deleted file mode 100644 index 9f6f549da..000000000 Binary files a/frinx-machine/use-cases/lacp/progress.png and /dev/null differ diff --git a/frinx-machine/use-cases/lacp/task-details.png b/frinx-machine/use-cases/lacp/task-details.png deleted file mode 100644 index 2aca50d69..000000000 Binary files a/frinx-machine/use-cases/lacp/task-details.png and /dev/null differ diff --git a/frinx-machine/use-cases/obtain-platform-inventory-data/Platform-Inventory-Execute.png b/frinx-machine/use-cases/obtain-platform-inventory-data/Platform-Inventory-Execute.png deleted file mode 100644 index b5454f2e8..000000000 Binary files a/frinx-machine/use-cases/obtain-platform-inventory-data/Platform-Inventory-Execute.png and /dev/null differ diff --git a/frinx-machine/use-cases/obtain-platform-inventory-data/obtain-platform-inventory-data.rst b/frinx-machine/use-cases/obtain-platform-inventory-data/obtain-platform-inventory-data.rst deleted file mode 100644 index de63761e3..000000000 --- a/frinx-machine/use-cases/obtain-platform-inventory-data/obtain-platform-inventory-data.rst +++ /dev/null @@ -1,47 +0,0 @@ - -Obtain platform inventory data -============================== - - -In this section we show how users can execute workflows to obtain platform inventory data from devices in the network and to store them in the inventory (Elasticsearch). - -The goal of this use case is to collect inventory information about physical devices via their vendor specific NETCONF or CLI interfaces, convert this information into OpenConfig data structures and store the resulting information as a child entry to its associated parent in Elasticsearch. - -The outcome is that users can manage their physical network inventory (line cards, route processors, modules, transceivers, etc …) across different hardware vendors in real-time via a single uniform interface. - -.. 
important:: - - Make sure you didn't skip :ref:`mounting all devices in inventory `, otherwise this workflow might not work correctly. - - -.. error:: - - This use case does not work with VRP01 and netconf-testtool devices. Because of that, before executing other workflows, you need to unmount the "VRP01" and "netconf-testtool" devices that were previously mounted by the **Mount_all_from_inventory** workflow. In order to unmount these devices, go to :menuselection:`Home --> UniConfig` select the "VRP01" and "netconf-testtool" device and click "Unmount Devices". - -Collect platform information from the device and store in the inventory ------------------------------------------------------------------------ - -In the next step we will execute a workflow that collects platform information from every mounted device, converts the vendor specific information into OpenConfig format and writes the resulting data to the inventory. - -Click on: :menuselection:`Home --> Workflows --> Definitions` - -Then search for the workflow **Read_components_all_from_unified_update_inventory** - - -.. image:: read_all_from_inventory.gif - :target: /_images/read_all_from_inventory.gif - :alt: Workflow Config - - -Once selected, you can execute the workflow without providing additional information. Click on the workflow ID that popped up to see the progress and additional details about this workflow. You should see something similar to this: - - -.. image:: read_all_inv-flow.png - :target: /_images/read_all_inv-flow.png - :alt: Workflow detail - - -After the main and sub-workflows have completed successfully the platform information is now stored in the inventory as a child entry to the device ID that the information comes from. - - -The execution of all workflows can be done manually, via the UI, or can be automated and scheduled via the REST API of conductor server. 
diff --git a/frinx-machine/use-cases/obtain-platform-inventory-data/read_all_from_inventory.gif b/frinx-machine/use-cases/obtain-platform-inventory-data/read_all_from_inventory.gif deleted file mode 100644 index 6037ea859..000000000 Binary files a/frinx-machine/use-cases/obtain-platform-inventory-data/read_all_from_inventory.gif and /dev/null differ diff --git a/frinx-machine/use-cases/obtain-platform-inventory-data/read_all_inv-flow.png b/frinx-machine/use-cases/obtain-platform-inventory-data/read_all_inv-flow.png deleted file mode 100644 index c7e4dcb86..000000000 Binary files a/frinx-machine/use-cases/obtain-platform-inventory-data/read_all_inv-flow.png and /dev/null differ diff --git a/frinx-machine/use-cases/policy-filter-xr/policy_filter_flow.png b/frinx-machine/use-cases/policy-filter-xr/policy_filter_flow.png deleted file mode 100644 index 340401c1f..000000000 Binary files a/frinx-machine/use-cases/policy-filter-xr/policy_filter_flow.png and /dev/null differ diff --git a/frinx-machine/use-cases/policy-filter-xr/policy_filter_input.png b/frinx-machine/use-cases/policy-filter-xr/policy_filter_input.png deleted file mode 100644 index 502ff8c4c..000000000 Binary files a/frinx-machine/use-cases/policy-filter-xr/policy_filter_input.png and /dev/null differ diff --git a/frinx-machine/use-cases/policy-filter-xr/policy_filter_input_data.png b/frinx-machine/use-cases/policy-filter-xr/policy_filter_input_data.png deleted file mode 100644 index 9a348ba52..000000000 Binary files a/frinx-machine/use-cases/policy-filter-xr/policy_filter_input_data.png and /dev/null differ diff --git a/frinx-machine/use-cases/policy-filter-xr/policy_filter_run.gif b/frinx-machine/use-cases/policy-filter-xr/policy_filter_run.gif deleted file mode 100644 index 82ae6d35b..000000000 Binary files a/frinx-machine/use-cases/policy-filter-xr/policy_filter_run.gif and /dev/null differ diff --git a/frinx-machine/use-cases/policy-filter-xr/policy_filter_search.png 
b/frinx-machine/use-cases/policy-filter-xr/policy_filter_search.png deleted file mode 100644 index 4ba70caca..000000000 Binary files a/frinx-machine/use-cases/policy-filter-xr/policy_filter_search.png and /dev/null differ diff --git a/frinx-machine/use-cases/policy-filter-xr/readme.md b/frinx-machine/use-cases/policy-filter-xr/readme.md deleted file mode 100644 index bd5b2b76d..000000000 --- a/frinx-machine/use-cases/policy-filter-xr/readme.md +++ /dev/null @@ -1,94 +0,0 @@ -# Policy filter XR - -This workflow uses UniConfig to showcase the filtering capabilities of some of -our system tasks. It filters through the interfaces of the device, returns the -name of the interface based on its user-provided description and applies the -chosen policy on that interface. - -**Supported device**: ios-xr - -This workflow can be tested on the following devices: -**ISOXR653_1, ISOXR653_2, ISOXR663_1** - -When inserting the data into the input, we recommend using -`/Cisco-IOS-XR-ifmgr-cfg:interface-configurations` in the URI. - -For testing purposes, you can use the following: -- Description: `FrinxDescription` -- Policy_map_name: `Custom_policy_map`. - -*Before running this workflow, make sure that the testing device is already -installed.* - -!!!danger -Policy creation is not part of this workflow. The chosen policy must exist on -the device before this workflow is run. -!!! - -## Searching the workflow - -![Search](policy_filter_search.png) - -## Sync & Replace - -For all workflows that interact with devices, we consider it best practice to -start with the tasks **Sync from network** and **Replace config with oper**. -This ensures that the internal databases of FRINX Machine are in sync with the -latest configuration of the device. The input for these tasks is simply the name -of the node (device). - -## Read device data - -The next part is reading the device config. 
In the -**UNICONFIG_read_structured_device_data** task, you can specify which part of -the config to read with the URI. In this case, we leave the **URI** input field -empty. - -## jsonJQ filter - -jsonJQ is one of the system tasks that is useful for filtering data. We use the -following query expression: - -``` -.["frinx-uniconfig-topology:configuration"]["Cisco-IOS-XR-ifmgr-cfg:interface-configurations"] . "interface-configuration" | select(. != null) | .[] | select(.description == "${workflow.input.Description}") | {interface: ."interface-name"} -``` - -We search through the whole config, and under the -**Cisco-IOS-XR-ifmgr-cfg:interface-configurations** model we find the interface -with a description given by the user. The task returns the name of that -interface. - -## Lambda - -Lambda is a generic task that can process any JS code. In this case, we use it -to parse the output of the jsonJQ task. jsonJQ returns the name of the interface -in a standard decoded format, for example `TenGigE0/0/0/0`. However, as we will -be using that interface in the URI, it must be encoded. You can do this with a -simple JS script: - -``` -{return(encodeURIComponent($.lambdaValue));} -``` - -As an example, we take the interface name `TenGigE0/0/0/0` and encode it to -`TenGigE0%2F0%2F0%2F0`. - -## Write & commit - -Lastly, we use the output of the lambda task for the configuration. We apply a -policy to the interface filtered based on its description. 
- -## Example input - -![Input](policy_filter_input_data.png) - -## Execution flow - -![Execution Flow](policy_filter_flow.png) - -## Run the workflow - -- device_id: `IOSXR653_1` -- Policy_map_name: `test_map_custom` - -![Running the workflow](run_wf_uniconfig_policy_filter_XR.gif) diff --git a/frinx-machine/use-cases/policy-filter-xr/run_wf_uniconfig_policy_filter_XR.gif b/frinx-machine/use-cases/policy-filter-xr/run_wf_uniconfig_policy_filter_XR.gif deleted file mode 100644 index b3a55932a..000000000 Binary files a/frinx-machine/use-cases/policy-filter-xr/run_wf_uniconfig_policy_filter_XR.gif and /dev/null differ diff --git a/frinx-machine/use-cases/save-and-run-command/Save-ExecutionFlow.png b/frinx-machine/use-cases/save-and-run-command/Save-ExecutionFlow.png deleted file mode 100644 index 8a4cd0d22..000000000 Binary files a/frinx-machine/use-cases/save-and-run-command/Save-ExecutionFlow.png and /dev/null differ diff --git a/frinx-machine/use-cases/save-and-run-command/Save-FlowOutput.png b/frinx-machine/use-cases/save-and-run-command/Save-FlowOutput.png deleted file mode 100644 index 50741d5b8..000000000 Binary files a/frinx-machine/use-cases/save-and-run-command/Save-FlowOutput.png and /dev/null differ diff --git a/frinx-machine/use-cases/save-and-run-command/Save-WorkflowConfig.png b/frinx-machine/use-cases/save-and-run-command/Save-WorkflowConfig.png deleted file mode 100644 index 6b413b70f..000000000 Binary files a/frinx-machine/use-cases/save-and-run-command/Save-WorkflowConfig.png and /dev/null differ diff --git a/frinx-machine/use-cases/save-and-run-command/execute_rpc_config.png b/frinx-machine/use-cases/save-and-run-command/execute_rpc_config.png deleted file mode 100644 index b4d1129de..000000000 Binary files a/frinx-machine/use-cases/save-and-run-command/execute_rpc_config.png and /dev/null differ diff --git a/frinx-machine/use-cases/save-and-run-command/execute_rpc_flow.png b/frinx-machine/use-cases/save-and-run-command/execute_rpc_flow.png 
deleted file mode 100644 index 5a3b92ab4..000000000 Binary files a/frinx-machine/use-cases/save-and-run-command/execute_rpc_flow.png and /dev/null differ diff --git a/frinx-machine/use-cases/save-and-run-command/execute_rpc_output.png b/frinx-machine/use-cases/save-and-run-command/execute_rpc_output.png deleted file mode 100644 index 143f0ca19..000000000 Binary files a/frinx-machine/use-cases/save-and-run-command/execute_rpc_output.png and /dev/null differ diff --git a/frinx-machine/use-cases/save-and-run-command/save-and-run-command.rst b/frinx-machine/use-cases/save-and-run-command/save-and-run-command.rst deleted file mode 100644 index 5517da791..000000000 --- a/frinx-machine/use-cases/save-and-run-command/save-and-run-command.rst +++ /dev/null @@ -1,98 +0,0 @@ - -Save and execute commands on devices -==================================== - -In this section you will see how users can save a command to inventory and run it on devices. The command in this case is platform specific. - -The goal of this use case is to execute a saved command on devices and save to output in the inventory. - -.. important:: - - Make sure you didn't skip :ref:`mounting all devices in inventory `, otherwise this workflow might not work correctly. - - -Save a command to inventory ---------------------------- - -In the next step we will execute a workflow that saves a command to inventory under a specific id. - -Click on: :menuselection:`Home --> Workflows --> Definitions` - -Then search for the workflow: **Add_cli_command_template_to_inventory** - - -.. image:: Save-WorkflowConfig.png - :target: /_images/Save-WorkflowConfig.png - :alt: Workflow Configuration - - - -.. code-block:: text - - template_id: sh_run - command: show running-config - - -Once you fill in the id and the command you want to save, continue to execute the workflow. - - -Click on the workflow ID that popped up and click again to see it's details. 
You can see the progress of the workflow, input/output data of each task and statistics associated with the workflow execution. - - -.. image:: aSave-ExecutionFlow.png - :target: /_images/Save-ExecutionFlow.png - :alt: Workflow Execution - -.. image:: Save-FlowOutput.png - :target: /_images/Save-FlowOutput.png - :alt: Task detail - - -After the successful completion of the workflow the command is saved in the inventory. To see the inventory, go to :menuselection:`Home --> Inventory` - - -Execute saved command on mounted devices ----------------------------------------- - -In the next step we will execute the saved command on a device and obtain the running configuration which we then save to the inventory. - -To run the command on one device in the inventory use **Execute_and_read_rpc_cli_device_from_inventory**. - -To run the command on all mounted devices in the inventory while simultaneously updating the inventory itself use **Execute_all_from_cli_update_inventory** - -To execute a command on one device and also update the inventory, you would use **Execute_and_read_rpc_cli_device_from_inventory_update_inventory**. - -In our example we will use **Execute_and_read_rpc_cli_device_from_inventory** which will execute a command from inventory on one device without saving the output of this command to inventory. - - -Click on :menuselection:`Home --> Workflows --> Definitions` and find **Execute_and_read_rpc_cli_device_from_inventory**. - - -.. image:: execute_rpc_config.png - :target: /_images/execute_rpc_config.png - :alt: Workflow input - -.. code-block:: text - - command_id: sh_run - device_id: IOS01 - params: (leave blank) - -After specifying the device id, the command id, and the input parameters (in our case empty: {}) you can run the workflow. - - -Look at the progress of the workflow by clicking on the workflow ID that popped up and click again to see it's details. Click "Execution Flow". 
Now you can see the progress of the workflow, input/output data of each task and statistics associated with the workflow execution. - - -.. image:: execute_rpc_flow.png - :target: /_images/execute_rpc_flow.png - :alt: Workflow diagram - -Click the green box with "execute_template" written inside it and click "Unescape" to unescape the Output. You should see the output of the command which shows you the running configuration such as: - - -.. image:: execute_rpc_output.png - :target: /_images/execute_rpc_output.png - :alt: Task detail - -The execution of all workflows can be done manually, via the UI, or can be automated and scheduled via the REST API of conductor server. diff --git a/frinx-resource-manager/index.yaml b/frinx-resource-manager/index.yaml index e2e7b1200..a4363d8d9 100644 --- a/frinx-resource-manager/index.yaml +++ b/frinx-resource-manager/index.yaml @@ -1,3 +1,3 @@ -Label: Frinx Resource Manager -icon: gear +Label: Resource Manager +icon: database order: 1000 \ No newline at end of file diff --git a/frinx-resource-manager/introduction/readme.md b/frinx-resource-manager/introduction/readme.md index edc8d2625..1d640e624 100644 --- a/frinx-resource-manager/introduction/readme.md +++ b/frinx-resource-manager/introduction/readme.md @@ -88,26 +88,21 @@ need for modifications. To achieve flexibility we are allowing: - Custom pool grouping to represent logical network parts (subnet, region, datacenter etc.) -### Multitenancy and RBAC +### RBAC -Multitenancy and Role Based Access Control is supported by Resource Manager. +Role Based Access Control is supported by Resource Manager. A simple RBAC model is implemented where only super-users (based on their role and user groups) can manipulate resource types, resource pools and labels. Regular users will only be able to read the above entities, allocate and free resources. 
-Resource Manager does not manage list tenants/users/roles/groups and relies +Resource Manager does not manage users/roles/groups and relies on external ID provider. Following headers are expected by Resource Manager graphQL server: ``` - x-tenant-id: name or ID of a tenant. This name is also used as part of PSQL DB instance name. from: name or ID of current user. x-auth-user-roles: list of roles associated with current user. x-auth-user-groups: list of groups associated with current user. ``` - -Resource Manager does not store any information about users or tenants in the -database, **except the name or ID of a tenant provided in `x-tenant-id` -header**. diff --git a/frinx-resource-manager/user-guide/readme.md index 8875a39e0..51fc02ac7 100644 --- a/frinx-resource-manager/user-guide/readme.md +++ b/frinx-resource-manager/user-guide/readme.md @@ -7,9 +7,7 @@ order: 4 ## API See examples in -[api\_tests](https://github.com/FRINXio/resource-manager/tree/master/api-tests) -or a VRF IP management sample use case in [postman -collection](https://www.getpostman.com/collections/514d68c6e43f1628d715). +[api\_tests](https://github.com/FRINXio/resource-manager/tree/master/api-tests).
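+The headers expected by the GraphQL server can be supplied from any HTTP client. A hedged sketch — the host, endpoint path, query fields, and the user/role/group values here are illustrative placeholders, not documented values:

```shell
# Illustrative only: replace <fm-host> and the placeholder identity values
# with those issued by your external ID provider / gateway.
curl -s "https://<fm-host>/api/resource" \
  -H "from: alice" \
  -H "x-auth-user-roles: OWNER" \
  -H "x-auth-user-groups: network-admins" \
  -H "Content-Type: application/json" \
  -d '{"query": "{ QueryResourceTypes { ID Name } }"}'
```

+In a real deployment these identity headers are normally injected by the API gateway rather than set by hand.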
## UI diff --git a/frinx-workflow-manager/create-and-modify-workflows/fm_search_integers_task.png b/frinx-workflow-manager/create-and-modify-workflows/fm_search_integers_task.png deleted file mode 100644 index 78a980f5d..000000000 Binary files a/frinx-workflow-manager/create-and-modify-workflows/fm_search_integers_task.png and /dev/null differ diff --git a/frinx-workflow-manager/create-and-modify-workflows/readme.md b/frinx-workflow-manager/create-and-modify-workflows/readme.md deleted file mode 100644 index 4093a340d..000000000 --- a/frinx-workflow-manager/create-and-modify-workflows/readme.md +++ /dev/null @@ -1,293 +0,0 @@ -# Create and Modify Workflows and Workers - -## Prepare Your Work Environment - -After you have installed and started the FRINX Machine (see -"") you will want to modify -existing workflows or add new workflows and workers to meet your needs. -We will be referring to the machine that is running the FRINX Machine -containers as host. Typically that host is a VM running on your laptop, -in your private cloud or in a public/virtual private cloud. Here is how -to get started. - -## Creating a worker - -Now that we have our environment prepared, we can move on to the first -step of creating a workflow. First we will create a worker that defines -the tasks utilized in our workflow. The goal is to have the task in our -workflow receive two input parameters (id\_1 and id\_2). The purpose of -our task is to add the two input variables and return the result. The -execution logic of our task will be implemented in a small python -function called worker. 
- -For a full documentation of tasks, workflows and the capabilities of -Netflix Conductor, please go to -[](https://netflix.github.io/conductor/) - -Create a worker in a correct repository (name of the worker is up to -you): - -``` -~/FRINX-machine/fm-workflows/demo-workflows/workers$ touch add_integers_worker.py -``` - -This is what we put in the file in our case: - -```python -from __future__ import print_function - - -def execute_add_two_integers(task): - addend_one = task['inputData']['id_1'] - addend_two = task['inputData']['id_2'] - result = int(addend_one) + int(addend_two) - return {'status': 'COMPLETED', 'output': {'result': result}, 'logs': []} - -def start(cc): - print('Starting add_two_integers worker') - cc.register('add_two_integers', { - "name": "add_two_integers", - "retryCount": 0, - "timeoutSeconds": 30, - "inputKeys": [ - "id_1", - "id_2" - ], - "timeoutPolicy": "TIME_OUT_WF", - "retryLogic": "FIXED", - "retryDelaySeconds": 0, - "responseTimeoutSeconds": 30 - } - ) - cc.start('add_two_integers', execute_add_two_integers, False) -``` - -Core of the worker is a task that contains simple method which does -addition with two inputs which user provides in GUI as you will see -later. Workers can have multiple tasks within itself, in our case one is -enough as an example. - -After this, you must register your worker in the main python file -"main.py" in the same directory where you just created your worker. All -workers you want to use in Frinx Machine must be included in this file. 
-File might look similar to this: - -```python # - import time - import worker_wrapper - from frinx_rest import conductor_url_base - import inventory_worker - import lldp_worker - import platform_worker - import vll_worker - import unified_worker - import vll_service_worker - import vpls_worker - import vpls_service_worker - import bi_service_worker - import common_worker - import psql_worker - from import_workflows import import_workflows - import cli_worker - import netconf_worker - import uniconfig_worker - import http_worker - from importDevices import import_devices - import os - import add_integers_worker - - - workflows_folder_path = '../workflows' - healtchchek_file_path = '../healthcheck' - - def main(): - if os.path.exists(healtchchek_file_path): - os.remove(healtchchek_file_path) - - - print('Starting FRINX workers') - cc = worker_wrapper.ExceptionHandlingConductorWrapper(conductor_url_base, 1, 1) - register_workers(cc) - import_workflows(workflows_folder_path) - import_devices("../devices/cli_device_data.csv", "../devices/cli_device_import.json") - import_devices("../devices/netconf_device_data.csv", "../devices/netconf_device_import.json") - - with open(healtchchek_file_path, 'w'): pass - - # block - while 1: - time.sleep(1000) - - - def register_workers(cc): - platform_worker.start(cc) - lldp_worker.start(cc) - inventory_worker.start(cc) - unified_worker.start(cc) - psql_worker.start(cc) - add_integers_worker.start(cc) - # vll_worker.start(cc) - # vll_service_worker.start(cc) - # vpls_worker.start(cc) - # vpls_service_worker.start(cc) - # bi_service_worker.start(cc) - - - if __name__ == '__main__': - main() -``` - -Notice lines **22** and **53**, you must import both the worker file -and include it in "register\_workers(cc)" method. - -That is all in terms of worker creation. There is however few more -things to do in your environment. After doing all the above, we will -want to build our Frinx Machine based on our local changes. 
For that we -must edit the file "swarm-fm-workflow.yml" - -``` -~/FRINX-machine/fm-workflows/composefiles$ ls - -swarm-fm-workflows.yml -``` - -Find block "demo-workflows" in this file. Change the image to use a -image called "local" (2): - -``` # -demo-workflows: - image: frinx/demo-workflows:local - logging: - driver: "json-file" - options: - max-file: "3" - max-size: "10m" - environment: - - UNICONFIG_URL_BASE=https://${CONSTRAINT_HOSTNAME}_uniconfig:8181/rests - healthcheck: - test: cat /home/app/healthcheck - interval: 10s - timeout: 5s - retries: 5 - start_period: 10s - deploy: - # placement: - # constraints: - # - node.hostname == ${CONSTRAINT_HOSTNAME} - mode: replicated - replicas: 1 -``` - -Now we can build our fm-workflows image with the added task. Use: - -``` -~/FRINX-machine/fm-workflows$ -docker build --no-cache -f demo-workflows/Dockerfile -t frinx/demo-workflows:local ./demo-workflows/ -``` - -!!!danger -While it is not necessary to use "--no-cache" flag, we recommend it to -make sure you rebuild the image with newly edited code and not the one -stored in cache memory. -!!! - -Now just start fm-workflows and you're good to go: - -``` -~/FRINX-machine/fm-workflows$ -./startup.sh -``` - -If you did everything correctly, you will now see your new task in Frinx -Machine. Go to **Workflow Manager -> Tasks -> Search**: - -![Search integers](fm_search_integers_task.png) - -Now you can create workflow that uses this task. 
**Workflow Manager** -> **"+ New"**: - -[!embed](https://www.youtube.com/embed/dB_yR1GhBGU) - -### After being prompted for inputs, you should see that addition ran successfully: - -![Search integers](successful_workflow_addition.png) - -```json -{ - "taskType": "add_two_integers", - "status": "COMPLETED", - "inputData": { - "id_1": "6", - "id_2": "5" - }, - "referenceTaskName": "add_two_integers_ref_XCFR", - "retryCount": 0, - "seq": 1, - "pollCount": 1, - "taskDefName": "add_two_integers", - "scheduledTime": 1607707042557, - "startTime": 1607707043195, - "endTime": 1607707043237, - "updateTime": 1607707043196, - "startDelayInSeconds": 0, - "retried": false, - "executed": true, - "callbackFromWorker": true, - "responseTimeoutSeconds": 30, - "workflowInstanceId": "1fcf782c-1cd6-4219-a6eb-e9d218de8b80", - "workflowType": "Add_two_integers", - "taskId": "9b88a65e-9869-420c-bd05-d42963948a39", - "callbackAfterSeconds": 0, - "workerId": "b5592d30c747", - "outputData": { - "result": 11 - }, - "workflowTask": { - "name": "add_two_integers", - "taskReferenceName": "add_two_integers_ref_XCFR", - "inputParameters": { - "id_1": "${workflow.input.id_1}", - "id_2": "${workflow.input.id_2}" - }, - "type": "SIMPLE", - "decisionCases": {}, - "defaultCase": [], - "forkTasks": [], - "startDelay": 0, - "joinOn": [], - "optional": false, - "taskDefinition": { - "createTime": 1607703392256, - "createdBy": "", - "name": "add_two_integers", - "retryCount": 0, - "timeoutSeconds": 30, - "inputKeys": [ - "id_1", - "id_2" - ], - "outputKeys": [], - "timeoutPolicy": "TIME_OUT_WF", - "retryLogic": "FIXED", - "retryDelaySeconds": 0, - "responseTimeoutSeconds": 30, - "inputTemplate": {}, - "rateLimitPerFrequency": 0, - "rateLimitFrequencyInSeconds": 1 - }, - "defaultExclusiveJoinTask": [], - "asyncComplete": false, - "loopOver": [] - }, - "rateLimitPerFrequency": 0, - "rateLimitFrequencyInSeconds": 1, - "workflowPriority": 0, - "iteration": 0, - "taskDefinition": { - "present": true - }, - 
"loopOverTask": false, - "taskStatus": "COMPLETED", - "queueWaitTime": 638, - "logs": [] -} -``` \ No newline at end of file diff --git a/frinx-workflow-manager/create-and-modify-workflows/successful_workflow_addition.png b/frinx-workflow-manager/create-and-modify-workflows/successful_workflow_addition.png deleted file mode 100644 index 88998d625..000000000 Binary files a/frinx-workflow-manager/create-and-modify-workflows/successful_workflow_addition.png and /dev/null differ diff --git a/frinx-workflow-manager/index.yaml b/frinx-workflow-manager/index.yaml index 924abf20d..c11eda523 100644 --- a/frinx-workflow-manager/index.yaml +++ b/frinx-workflow-manager/index.yaml @@ -1,3 +1,3 @@ -Label: Frinx Workflow Manager -icon: gear +Label: Workflow Manager +icon: workflow order: 2000 \ No newline at end of file diff --git a/frinx-workflow-manager/introduction/readme.md b/frinx-workflow-manager/introduction/readme.md index 9438ac164..75911ab4a 100644 --- a/frinx-workflow-manager/introduction/readme.md +++ b/frinx-workflow-manager/introduction/readme.md @@ -20,6 +20,6 @@ FRINX Machine. FRINX Workflow Manager uses Netflix's Conductor for task/workflow orchestration. We recommend to take a look at their -[Documentation](https://netflix.github.io/conductor/configuration/taskdef/) +[Documentation](https://docs.conductor-oss.org/devguide/concepts/index.html) as an introduction to Tasks, Workflows, Definitions and an overall prerequisite to working with FRINX Workflow Manager. diff --git a/frinx-workflow-manager/python-sdk/development/readme.md b/frinx-workflow-manager/python-sdk/development/readme.md new file mode 100644 index 000000000..439686506 --- /dev/null +++ b/frinx-workflow-manager/python-sdk/development/readme.md @@ -0,0 +1,97 @@ +--- +order: 1000 +label: Developent environment +--- + +# Development environment + +This guide provides the step-by-step instructions for preparing develoment environment. 
+
+## Prerequisites
+
+- `Cluster`: Make sure that your cluster is running.
+- `Helm`: Make sure that Helm is installed. Follow the Helm installation guide if necessary.
+- `Python`: Make sure that your environment has a Python ^3.10 interpreter installed.
+- `Poetry`: Make sure that Poetry is installed. Follow the Poetry installation guide if necessary.
+
+## Step 1: Start Frinx Machine
+
+Install Frinx Machine with ingress enabled [!ref icon="briefcase"](/frinx-machine/installation/customization/frinx-machine-customization/readme.md)
+
+
+Check out the gitops-boilerplate repository to run Frinx Machine locally [!ref target="blank" icon="mark-github" text="gitops-boilerplate"](https://github.com/FRINXio/gitops-boilerplate)
+
+Make sure the workflow-manager and krakend ingresses are enabled; they are required for local development.
+
+```yaml
+frinx-machine:
+  krakend:
+    ingress:
+      enabled: true
+      className: nginx
+      annotations:
+        # force-ssl-redirect must be disabled in case you are using a self-signed certificate
+        # nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
+        nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
+        nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
+        nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
+      hosts:
+        - host: krakend.127.0.0.1.nip.io
+          paths:
+            - path: "/"
+              pathType: ImplementationSpecific
+
+  workflow-manager:
+    ingress:
+      enabled: true
+      hosts:
+        - host: workflow-manager.127.0.0.1.nip.io
+          paths:
+            - path: "/"
+              pathType: ImplementationSpecific
+
+  uniconfig:
+    image:
+      repository: "frinxio/uniconfig"
+
+  performance-monitor:
+    image:
+      repository: "frinxio/performance-monitor"
+```
+
+If you are using minikube, get the minikube IP:
+
+```bash
+minikube ip
+
+192.168.49.2
+```
+
+Then map that IP to the ingress hosts in your /etc/hosts:
+
+```bash
+#/etc/hosts
+192.168.49.2 krakend.127.0.0.1.nip.io
+192.168.49.2 workflow-manager.127.0.0.1.nip.io
+```
+
+To verify the ingresses:
+
+```bash
+$ kubectl get ingress -n frinx
+
+NAME               CLASS   HOSTS                               ADDRESS        PORTS   AGE
+conductor-server   nginx   workflow-manager.127.0.0.1.nip.io   192.168.49.2   80      161m
+krakend            nginx   krakend.127.0.0.1.nip.io            192.168.49.2   80      161m
+```
+
+## Step 2: Clone worker-example repository
+
+Clone the repository and follow the instructions in its README.md.
+
+[!ref target="blank" icon="mark-github" text="frinx-workers-boilerplate"](https://github.com/FRINXio/frinx-workers-boilerplate)
+
+
+## Step 3: Deploy to cluster
+
+[!ref](/frinx-machine/installation/custom-worker-deployment/readme.md)
diff --git a/frinx-workflow-manager/python-sdk/frinx-python-sdk/readme.md b/frinx-workflow-manager/python-sdk/frinx-python-sdk/readme.md
new file mode 100644
index 000000000..8a080f3dd
--- /dev/null
+++ b/frinx-workflow-manager/python-sdk/frinx-python-sdk/readme.md
@@ -0,0 +1,232 @@
+---
+order: 4000
+label: Frinx Python SDK
+---
+
+Frinx Python SDK
+==========================
+
+The FRINX Python SDK is a flexible tool designed to simplify interaction with FRINX network automation solutions.
+This SDK provides a set of Python libraries and utilities that enable developers to easily integrate with FRINX's platform,
+streamline their network automation workflows, and leverage FRINX's capabilities to the fullest.
+
+## Project init
+
+To begin, you will need to create a new Python project.
+
+An example of a simple project can be found [here](https://github.com/FRINXio/frinx-python-sdk/tree/main/examples/simple_worker).
+We recommend using Poetry as the package management tool and installing the package from [pypi.org](https://pypi.org/project/frinx-python-sdk/).
+
+
+### pyproject.toml
+
+```toml
+# pyproject.toml
+[tool.poetry]
+name = "example_worker"
+version = "0.0.1"
+description = "Example of worker implementation"
+
+[tool.poetry.dependencies]
+python = "^3.10"
+frinx-python-sdk = "^2"
+
+[build-system]
+requires = ["poetry-core"]
+build-backend = "poetry.core.masonry.api"
+```
+
+### Worker definition
+
+In worker.py, we create a piece of logic that workflows can use to solve complex problems.
+Our worker has a predefined interface where you define the conductor task definition, its inputs and outputs.
+In the execute method, you implement your custom logic: for example, a call to a service, parsing data from a previous task, or any other code you need.
+
+
+```python
+# workers.test_worker.py
+# imports from frinx-python-sdk
+from frinx.common.conductor_enums import TaskResultStatus
+from frinx.common.type_aliases import ListAny
+from frinx.common.worker.service import ServiceWorkersImpl
+from frinx.common.worker.task_def import TaskDefinition
+from frinx.common.worker.task_def import TaskInput
+from frinx.common.worker.task_def import TaskOutput
+from frinx.common.worker.task_result import TaskResult
+from frinx.common.worker.worker import WorkerImpl
+
+# ServiceWorkersImpl: Groups workers by purpose into one class.
# Then all workers can be registered together.
+class TestWorkers(ServiceWorkersImpl):
+
+    # WorkerImpl: Specific worker implementation with predefined interfaces
+    class Echo(WorkerImpl):
+
+        # TaskDefinition: Define conductor task parameters
+        # https://docs.conductor-oss.org/documentation/configuration/taskdef.html
+        class WorkerDefinition(TaskDefinition):
+            name: str = 'TEST_echo'
+            description: str = 'testing purposes: returns input unchanged'
+            labels: ListAny = ['TEST']
+            timeout_seconds: int = 60
+            response_timeout_seconds: int = 60
+
+        # TaskInput: Define conductor task input parameters
+        # These parameters are validated via pydantic before the execute method is called
+        class WorkerInput(TaskInput):
+            input: str
+
+        # TaskOutput: Define conductor task output values
+        class WorkerOutput(TaskOutput):
+            output: str
+
+        # The execute method contains the task logic.
+        # It receives validated input parameters and returns a TaskResult object
+        def execute(self, worker_input: WorkerInput) -> TaskResult[WorkerOutput]:
+            print(worker_input.input)
+            return TaskResult(
+                status=TaskResultStatus.COMPLETED,
+                logs=['Echo worker invoked successfully'],
+                output=self.WorkerOutput(output=worker_input.input)
+            )
+
+```
+
+### Workflow definition
+
+Now let's use the previously created worker in a workflow.
+
+```python
+# workers.test_workflow.py
+# imports from frinx-python-sdk
+from frinx.common.conductor_enums import TimeoutPolicy
+from frinx.common.type_aliases import ListStr
+from frinx.common.workflow.service import ServiceWorkflowsImpl
+from frinx.common.workflow.task import SimpleTask
+from frinx.common.workflow.task import SimpleTaskInputParameters
+from frinx.common.workflow.workflow import FrontendWFInputFieldType
+from frinx.common.workflow.workflow import WorkflowImpl
+from frinx.common.workflow.workflow import WorkflowInputField
+
+
+# ServiceWorkflowsImpl: Groups workflows by purpose into one class.
# Then all workflows can be registered together.
+class TestWorkflows(ServiceWorkflowsImpl):
+
+    # WorkflowImpl: Specific workflow implementation with predefined interfaces
+    # https://docs.conductor-oss.org/documentation/configuration/workflowdef/index.html
+    class TestWorkflow(WorkflowImpl):
+        name: str = 'Test_workflow'
+        version: int = 1
+        description: str = 'Test workflow built from test workers'
+        labels: ListStr = ['TEST']
+        timeout_seconds: int = 60 * 5
+        timeout_policy: TimeoutPolicy = TimeoutPolicy.TIME_OUT_WORKFLOW
+
+        # WorkflowImpl.WorkflowInput: Define workflow inputs
+        # WorkflowInputField: Defines how the input is rendered in the UI during execution
+        class WorkflowInput(WorkflowImpl.WorkflowInput):
+            text: WorkflowInputField = WorkflowInputField(
+                name='text',
+                frontend_default_value='hello world',
+                description='Text to be echoed by the workflow',
+                type=FrontendWFInputFieldType.STRING,
+            )
+
+        # WorkflowImpl.WorkflowOutput: Define workflow outputs
+        # Parameters in WorkflowOutput stay plain strings because they only hold
+        # a reference to a task output
+        class WorkflowOutput(WorkflowImpl.WorkflowOutput):
+            text: str
+
+        # The workflow_builder method builds the workflow JSON representation via code.
+        # You can add any logic here to handle various situations.
+        # For more info on how to create workflows, see the workflow-builder docs:
+        # https://docs.conductor-oss.org/devguide/labs/first-workflow.html
+        def workflow_builder(self, workflow_inputs: WorkflowInput) -> None:
+
+            echo_task = SimpleTask(
+                name=TestWorkers.Echo,
+                task_reference_name='echo',
+                input_parameters=SimpleTaskInputParameters(
+                    root=dict(
+                        input=workflow_inputs.text.wf_input()
+                    )
+                )
+            )
+            self.tasks.append(echo_task)
+
+            self.output_parameters = self.WorkflowOutput(
+                text=echo_task.output_ref('output')
+            )
+
+```
+
+### Start worker
+
+Now implement the conductor client, which registers our workflows and tasks and executes the worker logic.
+ +```python +# main.py +import logging +import os + +# Import conductor client +from frinx.client.v2.frinx_conductor_wrapper import FrinxConductorWrapper +from frinx.common.logging import logging_common +from frinx.common.logging.logging_common import LoggerConfig +from frinx.common.logging.logging_common import Root + + +# Register your tasks +def register_tasks(conductor_client: FrinxConductorWrapper) -> None: + logging.info('Register tasks') + from workers.test_worker import TestWorkers + TestWorkers().register(conductor_client=conductor_client) + +# Register your workflows +def register_workflows() -> None: + logging.info('Register workflows') + from workers.test_workflow import TestWorkflows + TestWorkflows().register(overwrite=True) + + +def main() -> None: + + # Enable logging + logging_common.configure_logging( + LoggerConfig( + root=Root( + level=os.environ.get('LOG_LEVEL', 'INFO').upper(), + handlers=['console'] + ) + ) + ) + + # Enable prometheus metrics + from frinx.common.telemetry.metrics import Metrics + from frinx.common.telemetry.metrics import MetricsSettings + + Metrics(settings=MetricsSettings(metrics_enabled=True)) + + # Register conductor client + from frinx.common.frinx_rest import CONDUCTOR_HEADERS + from frinx.common.frinx_rest import CONDUCTOR_URL_BASE + + conductor_client = FrinxConductorWrapper( + server_url=CONDUCTOR_URL_BASE, + # Define polling interval for conductor client. 
+        # How often the client asks conductor for tasks to execute
+        polling_interval=float(os.environ.get('CONDUCTOR_POLL_INTERVAL', 0.1)),
+        # Number of parallel threads that can execute tasks
+        max_thread_count=int(os.environ.get('CONDUCTOR_THREAD_COUNT', 50)),
+        headers=dict(CONDUCTOR_HEADERS),
+    )
+
+    register_tasks(conductor_client)
+    register_workflows()
+    conductor_client.start_workers()
+
+
+if __name__ == '__main__':
+    main()
+
+```
+
diff --git a/frinx-workflow-manager/python-sdk/index.md b/frinx-workflow-manager/python-sdk/index.md
new file mode 100644
index 000000000..50f247523
--- /dev/null
+++ b/frinx-workflow-manager/python-sdk/index.md
@@ -0,0 +1,11 @@
+---
+label: Python SDK
+icon: briefcase
+order: 2000
+---
+
+# Frinx Python SDK
+
+In this section, you will learn about the Frinx Python SDK and how to use it together with the prepared frinx-services-python-api/frinx-services-python-workers packages.
+
+![High-level Python SDK architecture](sdk.png)
diff --git a/frinx-workflow-manager/python-sdk/sdk.png b/frinx-workflow-manager/python-sdk/sdk.png
new file mode 100644
index 000000000..c70747a7f
Binary files /dev/null and b/frinx-workflow-manager/python-sdk/sdk.png differ
diff --git a/frinx-workflow-manager/python-sdk/services-python-api/readme.md b/frinx-workflow-manager/python-sdk/services-python-api/readme.md
new file mode 100644
index 000000000..6843ee5cc
--- /dev/null
+++ b/frinx-workflow-manager/python-sdk/services-python-api/readme.md
@@ -0,0 +1,130 @@
+---
+order: 4000
+label: Frinx Services Python API
+---
+
+Frinx Services Python API
+==========================
+
+The FRINX Services Python API repository is a monorepo containing Pydantic API wrappers.
+These components are designed to facilitate rapid worker development, enabling developers to use a common API source and track changes between service releases.
+If you find any incompatibilities, please create an issue on the GitHub repository.
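The core idea behind these wrappers can be sketched with a plain dataclass (a deliberately simplified, hypothetical model: the real wrappers are Pydantic classes generated from the service schemas, and the class and field names below are illustrative only):

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class InstallNodeInput:
    """Hypothetical request model; real wrappers are generated Pydantic classes."""
    node_id: str
    connection_type: str = "netconf"

    def to_body(self) -> str:
        # A typed model serializes to the JSON body the service expects,
        # so a renamed or removed field surfaces as a type error at
        # development time instead of a failed request at runtime.
        return json.dumps({"input": asdict(self)})

body = InstallNodeInput(node_id="IOS01").to_body()
print(body)
```

This is the benefit of a shared API source: every worker builds request bodies from the same models, so a schema change in one service release is caught in one place.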
+
+For more details, visit the [FRINX Services Python API documentation](https://github.com/FRINXio/frinx-services-python-api).
+
+### Package Importing
+
+To import the necessary FRINX API modules into your project, add the following entries to your pyproject.toml file:
+
+```toml
+# pyproject.toml
+[tool.poetry]
+name = "example-project"
+version = "0.1.0"
+description = ""
+readme = "README.md"
+packages = []
+
+[tool.poetry.dependencies]
+python = "^3.10"
+pydantic = "^2"
+requests = "^2.31.0"
+frinx-python-sdk = "^2"
+frinx-uniconfig-api = { git = "https://github.com/FRINXio/frinx-services-python-api.git", tag = "6.1.0", subdirectory = "uniconfig/python" }
+
+[build-system]
+requires = ["poetry-core"]
+build-backend = "poetry.core.masonry.api"
+```
+
+Each package can be imported via frinx_api. Below is an example of how to create a custom worker with an imported API module:
+
+### Example Usage
+
+The CreateTransaction API wrapper can be found in the [Uniconfig](https://github.com/FRINXio/frinx-services-python-api/blob/main/uniconfig/python/frinx_api/uniconfig/rest_api.py) module.
+
+In this example, the frinx-uniconfig-api dependency is imported into the project, and the CreateTransaction module is used to manage Uniconfig transactions.
+This approach allows you to use the generated API instead of creating a custom API implementation.
+ +```python +import requests +from frinx.common.frinx_rest import UNICONFIG_HEADERS +from frinx.common.frinx_rest import UNICONFIG_URL_BASE +from frinx.common.worker.task_def import TaskDefinition +from frinx.common.worker.task_def import TaskExecutionProperties +from frinx.common.worker.task_def import TaskInput +from frinx.common.worker.task_def import TaskOutput +from frinx.common.worker.task_result import TaskResult +from frinx.common.worker.task_result import TaskResultStatus +from frinx.common.worker.worker import WorkerImpl + + +class CreateTransaction(WorkerImpl): + from frinx_api.uniconfig.rest_api import CreateTransaction as UniconfigApi + + class ExecutionProperties(TaskExecutionProperties): + exclude_empty_inputs: bool = True + transform_string_to_json_valid: bool = True + + class WorkerDefinition(TaskDefinition): + name: str = "UNICONFIG_Create_transaction_RPC" + description: str = "Create Uniconfig transaction" + + class WorkerInput(TaskInput): + transaction_timeout: int | None = None + use_dedicated_session: bool = False + uniconfig_url_base: str = UNICONFIG_URL_BASE + + class WorkerOutput(TaskOutput): + transaction_id: str | None = None + uniconfig_server_id: str | None = None + uniconfig_url_base: str + + def execute(self, worker_input: WorkerInput) -> TaskResult[WorkerOutput]: + if self.UniconfigApi.request is None: + raise Exception(f"Failed to create request {self.UniconfigApi.request}") + + response = requests.request( + url=worker_input.uniconfig_url_base + self.UniconfigApi.uri, + method=self.UniconfigApi.method, + data=class_to_json(self.UniconfigApi.request()), + headers=dict(UNICONFIG_HEADERS), + ) + + if not response.ok: + return TaskResult( + status=TaskResultStatus.FAILED, + logs=response.content.decode("utf8"), + output=self.WorkerOutput(uniconfig_url_base=worker_input.uniconfig_url_base), + ) + + return TaskResult( + status=TaskResultStatus.COMPLETED, + output=self.WorkerOutput( + transaction_id=response.cookies["UNICONFIGTXID"], + 
uniconfig_server_id=response.cookies.get("uniconfig_server_id", None),
+                uniconfig_url_base=worker_input.uniconfig_url_base,
+            )
+        )
+
+```
+
+In this example, we used frinx-python-sdk and frinx-services-python-api to create a conductor worker compatible with the FRINX Machine 6.1.0 release.
+For more details on how to import and execute this worker, see the SDK section.
+
+### Versioning
+
+We release versions for each component change or as a combination of services based on the FRINX Machine release.
+If you have deployed FRINX Machine 6.1.0, use tag 6.1.0. For specific dependency versions, use the custom tag/branch/revision as shown below:
+
+
+```toml
+[tool.poetry.dependencies]
+...
+frinx-inventory-api = {git = "ssh://git@github.com/FRINXio/frinx-services-python-api.git", tag = "frinx-inventory-api_v2.2.0", subdirectory = "frinx-inventory-server/python"}
+...
+```
+
+By following these instructions, you can quickly integrate FRINX Services Python API into your project and start utilizing the provided functionalities to streamline your worker development.
\ No newline at end of file
diff --git a/frinx-workflow-manager/python-sdk/services-python-workers/readme.md b/frinx-workflow-manager/python-sdk/services-python-workers/readme.md
new file mode 100644
index 000000000..2c33cf561
--- /dev/null
+++ b/frinx-workflow-manager/python-sdk/services-python-workers/readme.md
@@ -0,0 +1,98 @@
+---
+order: 3000
+label: Frinx Services Python Workers
+---
+
+Frinx Services Python Workers
+==========================
+
+The FRINX Services Python Workers repository contains a collection of commonly used workers and API wrappers.
+These components are designed to facilitate rapid workflow development, enabling developers to share tasks across projects and avoid redundant work.
+Contributions are welcome to help expand and improve the repository.
+ +For more details, visit the [FRINX Services Python Workers documentation](https://github.com/FRINXio/frinx-services-python-workers). + +### Package Importing + +To import the necessary FRINX services into your project, add the following entries to your pyproject.toml file: + +``` bash +#pyproject.toml +[tool.poetry] +name = "example-project" +version = "0.1.0" +description = "" +readme = "README.md" +packages = [] + +[tool.poetry.dependencies] +python = "^3.10" +frinx-inventory-worker = {git = "ssh://git@github.com/FRINXio/frinx-services-python-workers.git", tag = "6.1.0", subdirectory = "inventory/python"} +frinx-schellar-worker = {git = "ssh://git@github.com/FRINXio/frinx-services-python-workers.git", tag = "6.1.0", subdirectory = "schellar/python"} +frinx-uniconfig-worker = {git = "ssh://git@github.com/FRINXio/frinx-services-python-workers.git", tag = "6.1.0", subdirectory = "uniconfig/python"} +frinx-resource-manager-worker = {git = "ssh://git@github.com/FRINXio/frinx-services-python-workers.git", tag = "6.1.0", subdirectory = "resource-manager/python"} +frinx-topology-discovery-worker = {git = "ssh://git@github.com/FRINXio/frinx-services-python-workers.git", tag = "6.1.0", subdirectory = "topology-discovery/python"} +frinx-http-worker = {git = "ssh://git@github.com/FRINXio/frinx-services-python-workers.git", tag = "6.1.0", subdirectory = "misc/python/http"} +frinx-kafka-worker = {git = "ssh://git@github.com/FRINXio/frinx-services-python-workers.git", tag = "6.1.0", subdirectory = "misc/python/kafka"} +frinx-influxdb-worker = {git = "ssh://git@github.com/FRINXio/frinx-services-python-workers.git", tag = "6.1.0", subdirectory = "misc/python/influxdb"} +frinx-conductor-system-test-worker = {git = "ssh://git@github.com/FRINXio/frinx-services-python-workers.git", tag = "6.1.0", subdirectory = "misc/python/conductor-system-test"} +frinx-python-lambda = {git = "ssh://git@github.com/FRINXio/frinx-services-python-workers.git", tag = "6.1.0", subdirectory = 
"misc/python/python-lambda"} + +[build-system] +requires = ["poetry-core"] +build-backend = "poetry.core.masonry.api" +``` + +Each package can be imported via frinx_worker. Below is an example of how to use the UniconfigManager from the frinx-uniconfig-worker package: + +### Example Usage + +UniconfigManager worker implementation can be found in the [UniconfigManager](https://github.com/FRINXio/frinx-services-python-workers/blob/6.1.0/uniconfig/python/frinx_worker/uniconfig/uniconfig_manager.py) module. + +In this example, the frinx-uniconfig-worker dependency is imported into the project, and the UniconfigManager module is used to manage Uniconfig transactions. +This approach allows you to use pre-built tasks instead of creating custom workers. + +```python +from frinx_worker.uniconfig.uniconfig_manager import UniconfigManager + +... + + def workflow_builder(self, workflow_inputs: WorkflowInput) -> None: + tx_start = SimpleTask( + name=UniconfigManager.CreateTransaction, + task_reference_name="tx_start", + input_parameters=SimpleTaskInputParameters(root=dict(uniconfig_url_base=workflow_inputs.zone.wf_input)), + ) + + commit_tx = SimpleTask( + name=UniconfigManager.CommitTransaction, + task_reference_name="tx_commit", + input_parameters=SimpleTaskInputParameters( + root=dict( + uniconfig_url_base=tx_start.output_ref("uniconfig_url_base"), + uniconfig_server_id=tx_start.output_ref("uniconfig_server_id"), + transaction_id=tx_start.output_ref("transaction_id"), + ) + ), + ) +... + +``` + + +### Versioning + +We release versions for each component change or as a combination of services based on the FRINX Machine release. +If you have deployed FRINX Machine 6.1.0, use tag 6.1.0. For specific dependency versions, use the custom tag/branch/revision as shown below: + + +```bash +[tool.poetry.dependencies] +... 
+frinx-inventory-worker = {git = "ssh://git@github.com/FRINXio/frinx-services-python-workers.git", tag = "frinx-inventory-worker_v1.0.1", subdirectory = "inventory/python"}
+...
+```
+
+By following these instructions, you can quickly integrate FRINX Services Python Workers into your project and start utilizing the provided functionalities to streamline your workflow development.
\ No newline at end of file
diff --git a/frinx-workflow-manager/python-sdk/workflow-builder/readme.md b/frinx-workflow-manager/python-sdk/workflow-builder/readme.md
new file mode 100644
index 000000000..c413872f4
--- /dev/null
+++ b/frinx-workflow-manager/python-sdk/workflow-builder/readme.md
@@ -0,0 +1,629 @@
+---
+order: 2000
+label: Python workflow builder
+---
+
+Workflow builder
+==========================
+
+In this part, you can find common implementations of various conductor tasks.
+It's a Pythonic representation of [conductor operators and system tasks](https://docs.conductor-oss.org/documentation/configuration/workflowdef/index.html).
+This implementation depends on frinx-python-sdk and uses Pydantic to serialize workflow definitions into JSON format.
+
+### DECISION TASK
+
++++ Case-Expression decision task
+
+```python
+self.tasks.append(DecisionTask(
+    name="decision",
+    task_reference_name="decision",
+    decision_cases={
+        "true": [
+            HumanTask(
+                name="human",
+                task_reference_name="human"
+            )
+        ],
+    },
+    default_case=[
+        TerminateTask(
+            name="terminate",
+            task_reference_name="terminate",
+            input_parameters=TerminateTaskInputParameters(
+                termination_status=WorkflowStatus.FAILED
+            ))
+    ],
+    input_parameters=DecisionTaskInputParameters(
+        status="${workflow.input.status}"
+    ),
+    case_expression="$.status === 'true' ?
'true' : 'false'" +)) +``` ++++ Case-Value decision task + +```python +self.tasks.append(DecisionCaseValueTask( + name="decision", + task_reference_name="decision", + decision_cases={ + "true": [ + HumanTask( + name="human", + task_reference_name="human" + ) + ], + }, + default_case=[ + TerminateTask( + name="terminate", + task_reference_name="terminate", + input_parameters=TerminateTaskInputParameters( + termination_status=WorkflowStatus.FAILED + )) + ], + input_parameters=DecisionCaseValueTaskInputParameters( + case_value_param="${workflow.input.status}" + ), +)) +``` ++++ + +### DO_WHILE TASK + ++++ Default +```python +loop_tasks = WaitDurationTask( + name="wait", + task_reference_name="wait", + input_parameters=WaitDurationTaskInputParameters( + duration="1 seconds" + ) +) + +self.tasks.append(DoWhileTask( + name="do_while", + task_reference_name="LoopTask", + loop_condition="if ( $.LoopTask['iteration'] < $.value ) { true; } else { false; }", + loop_over=[ + loop_tasks + ], + input_parameters={ + "value": workflow_inputs.value.wf_input + } +)) +``` ++++ + +### DYNAMIC_FORK TASK + ++++ Array task + +```python +task_inputs = InventoryWorkflows.InstallDeviceByName.WorkflowInput() + +fork_inputs = [ + { + task_inputs.device_name.name: "IOS01" + }, + { + task_inputs.device_name.name: "IOS02" + }, + { + task_inputs.device_name.name: "IOS02" + } +] + +self.tasks.append(DynamicForkTask( + name="dyn_fork", + task_reference_name="dyn_fork", + input_parameters=DynamicForkArraysTaskFromDefInputParameters( + fork_task_name=InventoryWorkflows.InstallDeviceByName, + fork_task_inputs=fork_inputs + ), +)) + +self.tasks.append(JoinTask( + name="join", + task_reference_name="join" +)) +``` + ++++ Task input 1 + +```python +self.tasks.append(DynamicForkTask( + name="dyn_fork", + task_reference_name="dyn_fork", + input_parameters=DynamicForkTaskFromDefInputParameters( + dynamic_tasks=InventoryWorkflows.InstallDeviceByName, + dynamic_tasks_input=workflow_inputs.device_name.wf_input 
+    ),
+))
+
+self.tasks.append(JoinTask(
+    name="join",
+    task_reference_name="join"
+))
+
+```
+
++++ Task input 2
+
+```python
+task_inputs = InventoryWorkflows.InstallDeviceByName.WorkflowInput()
+
+fork_inputs = [
+    {
+        task_inputs.device_name.name: "IOS01"
+    },
+    {
+        task_inputs.device_name.name: "IOS02"
+    },
+    {
+        task_inputs.device_name.name: "IOS03"
+    }
+]
+
+input_parameters = DynamicForkArraysTaskInputParameters(
+    fork_task_name="Install_device_by_name",
+    fork_task_inputs=fork_inputs
+)
+
+self.tasks.append(DynamicForkTask(
+    name="dyn_fork",
+    task_reference_name="dyn_fork",
+    input_parameters=input_parameters
+))
+```
+
++++ Task input 3
+
+```python
+self.tasks.append(DynamicForkTask(
+    name="dyn_fork",
+    task_reference_name="dyn_fork",
+    input_parameters=DynamicForkTaskInputParameters(
+        dynamic_tasks_input="Install_device_by_name",
+        dynamic_tasks=[
+            {
+                task_inputs.device_name.name: "IOS01"
+            },
+            {
+                task_inputs.device_name.name: "IOS02"
+            },
+            {
+                task_inputs.device_name.name: "IOS03"
+            }
+        ]
+    )
+))
+```
++++
+
+### EVENT TASK
+
++++ Default
+```python
+self.tasks.append(EventTask(
+    name="Event",
+    task_reference_name="event_a",
+    sink="conductor:Wait_task",
+    async_complete=False
+))
+```
++++
+
+### EXCLUSIVE_JOIN TASK
+
++++ Default
+```python
+self.tasks.append(ExclusiveJoinTask(
+    name="exclusive_join",
+    task_reference_name="exclusive_join",
+))
+
+```
++++ Join On
+
+A list of task reference names that this JOIN task waits on before completing.
+
+```python
+self.tasks.append(ExclusiveJoinTask(
+    name="exclusive_join",
+    task_reference_name="exclusive_join",
+    join_on=["task1", "task2"]
+))
+
+```
++++
+
+### FORK_JOIN TASK
+
++++ Default
+```python
+fork_tasks_a = []
+fork_tasks_b = []
+
+fork_tasks_a.append(SimpleTask(
+    name=Inventory.InventoryAddDevice,
+    task_reference_name="add_device_cli",
+    input_parameters=SimpleTaskInputParameters(
+        root=dict(
+            device_name="IOS01",
+            zone="uniconfig",
+            service_state="IN_SERVICE",
+            mount_body="body"
+        )
+    )
+))
+
+fork_tasks_a.append(SimpleTask(
+    name=Inventory.InventoryInstallDeviceByName,
+    task_reference_name="install_device_cli",
+    input_parameters=SimpleTaskInputParameters(
+        root=dict(
+            device_name="IOS01"
+        )
+    )
+))
+
+fork_tasks_b.append(SimpleTask(
+    name=Inventory.InventoryAddDevice,
+    task_reference_name="add_device_netconf",
+    input_parameters=SimpleTaskInputParameters(
+        root=dict(
+            device_name="NTF01",
+            zone="uniconfig",
+            service_state="IN_SERVICE",
+            mount_body="body"
+        )
+    )
+))
+
+fork_tasks_b.append(SimpleTask(
+    name=Inventory.InventoryInstallDeviceByName,
+    task_reference_name="install_device_netconf",
+    input_parameters=SimpleTaskInputParameters(
+        root=dict(
+            device_name="NTF01"
+        )
+    )
+))
+
+self.tasks.append(ForkTask(
+    name="fork",
+    task_reference_name="fork",
+    fork_tasks=[
+        fork_tasks_a,
+        fork_tasks_b
+    ]
+))
+
+```
++++
+
+### HUMAN TASK
+
++++ Default
+```python
+self.tasks.append(HumanTask(
+    name="human",
+    task_reference_name="human"
+))
+```
++++
+
+### INLINE TASK
+
++++ Default
+```python
+self.tasks.append(InlineTask(
+    name="inline",
+    task_reference_name="inline",
+    input_parameters=InlineTaskInputParameters(
+        expression='if ($.value){return {"result": true}} else { return {"result": false}}',
+        value="${workflow.variables.test}"
+    )))
+
+```
+
+INFO: The expression is wrapped in a JavaScript function:
+
+```javascript
+expression = "function e() { if ($.value){return {\"result\": true}} else { return {\"result\": false}} } e();"
+```
++++
+
+### JOIN TASK
+
+Read more on [conductor-oss docs](https://docs.conductor-oss.org/documentation/configuration/workflowdef/operators/join-task.html)
+
++++ Default
+```python
+self.tasks.append(JoinTask(
+    name="join",
+    task_reference_name="join"
+))
+```
++++ Join On
+
+A list of task reference names that this JOIN task waits on before completing.
+
+```python
+self.tasks.append(JoinTask(
+    name="join",
+    task_reference_name="join",
+    join_on=["task1", "task2"]
+))
+```
++++
+
+### JSON_JQ_TRANSFORM TASK
+
++++ Default
+```python
+json_jq = JsonJqTask(
+    name="json_jq",
+    task_reference_name="json_jq",
+    input_parameters=JsonJqTaskInputParameters(
+        query_expression="{ key3: (.key1.value1 + .key2.value2) }",
+        key1={
+            "value1": [
+                "a",
+                "b"
+            ]
+        },
+        key2={
+            "value2": [
+                "c",
+                "d"
+            ]
+        }
+    )
+)
+self.tasks.append(json_jq)
+```
++++
+
+### SET_VARIABLE TASK
+
++++ Default
+```python
+self.tasks.append(SetVariableTask(
+    name="var",
+    task_reference_name="var",
+    input_parameters=SetVariableTaskInputParameters(
+        root=dict(
+            env="frinx"
+        )
+    )
+))
+```
++++
+
+### SIMPLE TASK
+
++++ Default
+```python
+self.tasks.append(
+    SimpleTask(
+        name=Inventory.InventoryAddDevice,
+        task_reference_name="test",
+        input_parameters=SimpleTaskInputParameters(
+            root=dict(
+                device_name="IOS01",
+                zone="uniconfig",
+                service_state="IN_SERVICE",
+                mount_body="body"
+            )
+        )
+    )
+)
+```
++++
+
+### START_WORKFLOW TASK
+
+Start Workflow is an operator task used to start another workflow from an existing workflow. Unlike a sub-workflow task,
+a start workflow task doesn't create a relationship between the current workflow and the newly started workflow, which
+means it does not wait for the started workflow to complete.
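Because the task only hands the start request to Conductor and moves on, the useful mental model is the shape of that request. Below is a minimal sketch of the payload a START_WORKFLOW task carries; the field names follow the Conductor start-workflow operator (`name`, `version`, `input`, `correlationId`), while the concrete values and the correlation ID are illustrative assumptions. In practice, the SDK classes shown in the examples below build this structure for you.

```python
# Illustrative payload of a START_WORKFLOW task's "startWorkflow" input.
# Field names mirror the Conductor operator; the values are made up
# for this sketch and are NOT taken from the SDK.
start_workflow_payload = {
    "startWorkflow": {
        "name": "Install_device_by_name",   # target workflow name (example)
        "version": 1,                       # target workflow version
        "input": {"device_name": "IOS01"},  # input handed to the child workflow
        "correlationId": "install-ios01",   # optional tracking ID (assumed)
    }
}

# Fire-and-forget: the parent task's output only carries the child's
# workflow ID, never the child's result.
child_reference = {"workflowId": "<returned-by-conductor>"}
```

The `correlationId` is the usual way to find such detached child workflows later, since the parent keeps no other link to them.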
+
+#### INPUT PARAMETERS
+
+**start_workflow**:
+
+* StartWorkflowTaskInputParameters: StartWorkflowTaskPlainInputParameters | StartWorkflowTaskFromDefInputParameters
+* StartWorkflowTaskPlainInputParameters
+* StartWorkflowTaskFromDefInputParameters
+
++++ Default
+```python
+workflow_input_parameters = {
+    InventoryWorkflows.InstallDeviceByName.WorkflowInput().device_name.name: "IOS01"
+}
+
+task_inputs = StartWorkflowTaskInputParameters(
+    start_workflow=StartWorkflowTaskFromDefInputParameters(
+        workflow=InventoryWorkflows.InstallDeviceByName,
+        input=workflow_input_parameters
+    )
+)
+
+self.tasks.append(StartWorkflowTask(
+    name="Install_device_by_name",
+    task_reference_name="start",
+    input_parameters=task_inputs
+))
+```
++++
+
+### SUBWORKFLOW TASK
+
++++ Default
+```python
+sub_workflow_param = SubWorkflowParam(
+    name=InventoryWorkflows.AddDeviceToInventory.__name__,
+    version=1
+)
+
+workflows_inputs = InventoryWorkflows.AddDeviceToInventory.WorkflowInput()
+
+sub_workflow_input = {}
+sub_workflow_input.setdefault(workflows_inputs.device_name.name, "IOS01")
+sub_workflow_input.setdefault(workflows_inputs.zone.name, "uniconfig")
+
+self.tasks.append(SubWorkflowTask(
+    name="subworkflow",
+    task_reference_name="subworkflow",
+    sub_workflow_param=sub_workflow_param,
+    input_parameters=SubWorkflowInputParameters(
+        root=sub_workflow_input
+    )
+))
+```
+
++++ SubWorkflowFromDefParam
+
+SubWorkflowFromDefParam validates the sub-workflow and its workflow inputs.
+
+```python
+sub_workflow_param = SubWorkflowFromDefParam(
+    name=InventoryWorkflows.AddDeviceToInventory
+)
+
+workflows_inputs = InventoryWorkflows.AddDeviceToInventory.WorkflowInput()
+
+sub_workflow_input = {}
+sub_workflow_input.setdefault(workflows_inputs.device_name.name, "IOS01")
+sub_workflow_input.setdefault(workflows_inputs.zone.name, "uniconfig")
+
+self.tasks.append(SubWorkflowTask(
+    name="subworkflow",
+    task_reference_name="subworkflow",
+    sub_workflow_param=sub_workflow_param,
+    input_parameters=SubWorkflowInputParameters(
+        root=sub_workflow_input
+    )
+))
+```
++++
+
+### SWITCH TASK
+
++++ INPUT PARAMETERS
+
+* SwitchTaskValueParamInputParameters -> VALUE-PARAM
+* SwitchTaskInputParameters -> JAVASCRIPT
+
+VALUE-PARAM evaluator type
+
+```python
+switch = SwitchTask(
+    name="switch",
+    task_reference_name="switch",
+    decision_cases={
+        "true": [
+            WaitDurationTask(
+                name="wait",
+                task_reference_name="wait1",
+                input_parameters=WaitDurationTaskInputParameters(
+                    duration="10 seconds"
+                )
+            )
+        ]},
+    default_case=[
+        WaitDurationTask(
+            name="wait",
+            task_reference_name="wait2",
+            input_parameters=WaitDurationTaskInputParameters(
+                duration="10 seconds"
+            )
+        )
+    ],
+    expression="switch_case_value",
+    evaluator_type=SwitchEvaluatorType.VALUE_PARAM,
+    input_parameters=SwitchTaskValueParamInputParameters(
+        switch_case_value="${workflow.input.value}"
+    )
+)
+self.tasks.append(switch)
+```
++++ JAVASCRIPT evaluator type
+
+```python
+switch = SwitchTask(
+    name="switch",
+    task_reference_name="switch",
+    decision_cases={
+        "true": [
+            WaitDurationTask(
+                name="wait",
+                task_reference_name="wait1",
+                input_parameters=WaitDurationTaskInputParameters(
+                    duration="10 seconds"
+                )
+            )
+        ]},
+    default_case=[
+        WaitDurationTask(
+            name="wait",
+            task_reference_name="wait2",
+            input_parameters=WaitDurationTaskInputParameters(
+                duration="10 seconds"
+            )
+        )
+    ],
+    expression="$.inputValue == 'true' ? 'true' : 'false'",
+    evaluator_type=SwitchEvaluatorType.JAVASCRIPT,
+    input_parameters=SwitchTaskInputParameters(
+        input_value="${workflow.input.value}"
+    )
+)
+
+self.tasks.append(switch)
+```
++++
+
+### TERMINATE TASK
+
++++ Default
+```python
+TerminateTask(
+    name="terminate",
+    task_reference_name="terminate",
+    input_parameters=TerminateTaskInputParameters(
+        termination_status=WorkflowStatus.COMPLETED,
+        workflow_output={"output": "COMPLETED"}
+    )
+)
+```
++++
+
+### WAIT_DURATION TASK
+
++++ Default
+```python
+self.tasks.append(WaitDurationTask(
+    name="WAIT",
+    task_reference_name="WAIT",
+    input_parameters=WaitDurationTaskInputParameters(
+        duration="10 seconds"
+    )
+))
+```
++++
+
+### WAIT_UNTIL TASK
+
++++ Default
+```python
+self.tasks.append(WaitUntilTask(
+    name="WAIT_UNTIL",
+    task_reference_name="WAIT_UNTIL",
+    input_parameters=WaitUntilTaskInputParameters(
+        until='2022-12-25 09:00 PST'
+    )
+))
+```
++++
\ No newline at end of file
diff --git a/frinx-workflow-manager/workflow-builder/readme.md b/frinx-workflow-manager/workflow-builder/readme.md
index 8f9fe284d..9555eeb9c 100644
--- a/frinx-workflow-manager/workflow-builder/readme.md
+++ b/frinx-workflow-manager/workflow-builder/readme.md
@@ -1,4 +1,4 @@
-# Workflow Builder
+# UI Workflow Builder
 
 Workflow Builder is the graphical interface for Workflow Manager and is used to create, modify and manage workflows.