diff --git a/.gitbook/assets/uber_2fa.png b/.gitbook/assets/uber_2fa.png
new file mode 100644
index 0000000..3435e75
Binary files /dev/null and b/.gitbook/assets/uber_2fa.png differ
diff --git a/api-reference/commands/assertnodefectswithai.md b/api-reference/commands/assertnodefectswithai.md
new file mode 100644
index 0000000..80751b3
--- /dev/null
+++ b/api-reference/commands/assertnodefectswithai.md
@@ -0,0 +1,35 @@
+# assertNoDefectsWithAI
+
+{% hint style="warning" %}
+
+This is an **experimental** feature powered by LLM technology.
+
+{% endhint %}
+
+```yaml
+- assertNoDefectsWithAI
+```
+
+Takes a screenshot, uploads it to an LLM with a pre-made prompt, and asks the
+model if it sees any obvious defects in the provided screenshot. Common defects
+include text and UI elements being cut off, overlapping, or not being centered
+within their containers.
+
+### Output
+
+Output is generated in HTML and JSON formats in the folder for the individual
+test run:
+
+```
+~/.maestro
+└── tests
+    ├── 2024-08-20_213616
+    │   ├── ai-(My first flow).json
+    │   ├── ai-(My second flow).json
+    │   ├── ai-report-(My first flow).html
+    │   ├── ai-report-(My second flow).html
+```
+
+{% content-ref url="ai-configuration.md" %}
+[ai-configuration.md](ai-configuration.md)
+{% endcontent-ref %}
diff --git a/api-reference/commands/assertwithai.md b/api-reference/commands/assertwithai.md
new file mode 100644
index 0000000..f213af0
--- /dev/null
+++ b/api-reference/commands/assertwithai.md
@@ -0,0 +1,48 @@
+# assertWithAI
+
+{% hint style="warning" %}
+
+This is an **experimental** feature powered by LLM technology.
+
+{% endhint %}
+
+### When to use?
+
+{% hint style="warning" %}
+
+As with all things generative AI, `assertWithAI` can be very helpful, but don't trust
+it blindly.
+
+{% endhint %}
+
+`assertWithAI` is useful when it's hard (or even impossible) to write the
+assertion using default assertion commands.
+
+Asserting the presence of a two-factor authentication prompt is a good example.
+
+<div data-full-width="false"><figure><img src="../../.gitbook/assets/uber_2fa.png" alt=""><figcaption></figcaption></figure></div>
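+
+With the built-in assertion commands, this check would have to anchor on
+concrete text or ids. A rough sketch of such an assertion is shown below (the
+selector text is hypothetical and only illustrates the idea); `assertWithAI`,
+by contrast, lets you state the expectation in plain language, as in the
+example that follows.
+
+```yaml
+# Hypothetical built-in alternative: brittle, because it depends on the exact copy shown on screen.
+- assertVisible: "Enter the 6-digit code"
+```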
+
+```yaml
+- assertWithAI:
+    assertion: A two-factor authentication prompt, with space for 6 digits, is visible.
+```
+
+
+### Output
+
+Output is generated in HTML and JSON formats in the folder for the individual
+test run:
+
+```
+~/.maestro
+└── tests
+    ├── 2024-08-20_213616
+    │   ├── ai-(My first flow).json
+    │   ├── ai-(My second flow).json
+    │   ├── ai-report-(My first flow).html
+    │   ├── ai-report-(My second flow).html
+```
+
+{% content-ref url="ai-configuration.md" %}
+[ai-configuration.md](ai-configuration.md)
+{% endcontent-ref %}
diff --git a/api-reference/configuration/ai-configuration.md b/api-reference/configuration/ai-configuration.md
new file mode 100644
index 0000000..7f1c842
--- /dev/null
+++ b/api-reference/configuration/ai-configuration.md
@@ -0,0 +1,38 @@
+# AI features configuration
+
+Some commands, such as `assertWithAI` and `assertNoDefectsWithAI`, use generative
+AI models, which are not built directly into the Maestro CLI. Therefore, to use such
+commands, additional configuration is required.
+
+### Model
+
+The default model is the latest GPT-4o.
+
+You can configure which model to use with the `MAESTRO_CLI_AI_MODEL` env var, for example:
+
+```console
+export MAESTRO_CLI_AI_MODEL=claude-3-5-sonnet-20240620
+```
+
+Currently supported:
+- GPT family of models from OpenAI
+- Claude family of models from Anthropic
+
+Support for more models and providers is tracked [in this issue](https://github.com/mobile-dev-inc/maestro/issues/1957).
+
+### API key
+
+To use these commands, an API key for the LLM service is required. To set it, export the
+`MAESTRO_CLI_AI_KEY` env var.
+
+For example, to set the key for OpenAI:
+
+```console
+export MAESTRO_CLI_AI_KEY=sk-4NXxdLXY4H9DZW0Vpf4lT3HuBaFJoz1zoL21eLoLRKlyXd69
+```
+
+or for Anthropic:
+
+```console
+export MAESTRO_CLI_AI_KEY=sk-ant-api03-U9vWi8GDrxRAvA2RL2RMCImYCQr8BFCbNOq2woeRXLNz2Iy4PbY1X2137leSm92mitI7F9IwxKIrXtXgTIzj7A-2AvgbwAA
+```
diff --git a/api-reference/configuration/flow-configuration.md b/api-reference/configuration/flow-configuration.md
index 71ae6ec..7b1d8d4 100644
--- a/api-reference/configuration/flow-configuration.md
+++ b/api-reference/configuration/flow-configuration.md
@@ -9,8 +9,7 @@ The following properties can be configured on a given Flow:
 * `onFlowStart`: This is a hook that takes a list of Maestro commands as an argument. These commands will be executed before the initiation of each flow. Typically, this hook is used to run various setup scripts.
 * `onFlowComplete`: This hook accepts a list of Maestro commands that are executed upon the completion of each flow. It's important to note that these commands will run regardless of whether a particular flow has ended successfully or has encountered a failure. Typically, this hook is used to run various teardown / cleanup scripts.
 
-```yaml
-# flow.yaml
+```yaml title="flow.yaml"
 appId: my.app
 name: Custom Flow name
 tags: