Add docs for 2 new AI commands #76

Merged · 3 commits · Aug 30, 2024
Changes from 1 commit
update
bartekpacia committed Aug 28, 2024
commit de4c505d1171f8664cdab54497ec41d1409fdfbb
39 changes: 4 additions & 35 deletions api-reference/commands/assertnodefectswithai.md
@@ -15,41 +15,6 @@ model if it sees any obvious defects in the provided screenshot. Common defects
include text and UI elements being cut off, overlapping, or not being centered
within their containers.
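
For context, a minimal sketch of how this command might be used in a Flow (assuming the bare, no-argument form; `com.example.app` is a placeholder app id):

```yaml
# Hypothetical flow.yaml: launch the app, then ask the AI model to flag obvious UI defects.
appId: com.example.app
---
- launchApp
- assertNoDefectsWithAI
```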

### Configuration

#### Model

The default model is the latest GPT-4o.

You can configure the model to use with the `MAESTRO_CLI_AI_MODEL` env var, for example:

```console
export MAESTRO_CLI_AI_MODEL=claude-3-5-sonnet-20240620
```

Currently supported:
- GPT family of models from OpenAI
- Claude family of models from Anthropic

Support for more models and providers is tracked [in this issue](https://github.com/mobile-dev-inc/maestro/issues/1957).

#### API key

To use this command, an API key for the LLM service is required. To set it, export the
`MAESTRO_CLI_AI_KEY` env var.

For example, to set the key for OpenAI:

```console
export MAESTRO_CLI_AI_KEY=sk-4NXxdLXY4H9DZW0Vpf4lT3HuBaFJoz1zoL21eLoLRKlyXd69
```

or for Anthropic:

```console
export MAESTRO_CLI_AI_KEY=sk-ant-api03-U9vWi8GDrxRAvA2RL2RMCImYCQr8BFCbNOq2woeRXLNz2Iy4PbY1X2137leSm92mitI7F9IwxKIrXtXgTIzj7A-2AvgbwAA
```

### Output

Output is generated in HTML and JSON formats in the folder for the individual
@@ -64,3 +29,7 @@ test run:
│   ├── ai-report-(My first flow).html
│   ├── ai-report-(My second flow).html
```

{% content-ref url="ai-configuration.md" %}
[ai-configuration.md](ai-configuration.md)
{% endcontent-ref %}
4 changes: 4 additions & 0 deletions api-reference/commands/assertwithai.md
@@ -42,3 +42,7 @@ test run:
│   ├── ai-report-(My first flow).html
│   ├── ai-report-(My second flow).html
```

{% content-ref url="ai-configuration.md" %}
[ai-configuration.md](ai-configuration.md)
{% endcontent-ref %}
38 changes: 38 additions & 0 deletions api-reference/configuration/ai-configuration.md
@@ -0,0 +1,38 @@
# AI features configuration

Some commands, such as `assertWithAI` and `assertNoDefectsWithAI`, use generative
AI models, which are not built directly into the Maestro CLI. Using these commands
therefore requires some additional configuration.
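
As a quick orientation, here is a sketch of a Flow that exercises both commands (command syntax as documented on the respective command pages; the app id and the assertion text are illustrative placeholders):

```yaml
# Hypothetical flow.yaml using both AI-backed commands.
appId: com.example.app
---
- launchApp
# Free-form assertion evaluated by the configured AI model.
- assertWithAI:
    assertion: The login screen shows email and password fields.
# Asks the model to flag obvious visual defects on the current screen.
- assertNoDefectsWithAI
```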

### Model

The default model is the latest GPT-4o.

You can configure the model to use with the `MAESTRO_CLI_AI_MODEL` env var, for example:

```console
export MAESTRO_CLI_AI_MODEL=claude-3-5-sonnet-20240620
```

Currently supported:
- GPT family of models from OpenAI
- Claude family of models from Anthropic

Support for more models and providers is tracked [in this issue](https://github.com/mobile-dev-inc/maestro/issues/1957).

### API key

To use these commands, an API key for the LLM service is required. To set it, export the
`MAESTRO_CLI_AI_KEY` env var.

For example, to set the key for OpenAI:

```console
export MAESTRO_CLI_AI_KEY=sk-4NXxdLXY4H9DZW0Vpf4lT3HuBaFJoz1zoL21eLoLRKlyXd69
```

or for Anthropic:

```console
export MAESTRO_CLI_AI_KEY=sk-ant-api03-U9vWi8GDrxRAvA2RL2RMCImYCQr8BFCbNOq2woeRXLNz2Iy4PbY1X2137leSm92mitI7F9IwxKIrXtXgTIzj7A-2AvgbwAA
```
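
Putting the two settings together, a typical setup might look like this (placeholder values; substitute your own model name and API key):

```console
export MAESTRO_CLI_AI_MODEL=claude-3-5-sonnet-20240620
export MAESTRO_CLI_AI_KEY=<your-api-key>

# AI-backed commands in the flow will now use the configured model and key.
maestro test flow.yaml
```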
3 changes: 1 addition & 2 deletions api-reference/configuration/flow-configuration.md
@@ -9,8 +9,7 @@ The following properties can be configured on a given Flow:
* `onFlowStart`: This is a hook that takes a list of Maestro commands as an argument. These commands will be executed before the initiation of each flow. Typically, this hook is used to run various setup scripts.
* `onFlowComplete`: This hook accepts a list of Maestro commands that are executed upon the completion of each flow. It's important to note that these commands will run regardless of whether a particular flow has ended successfully or has encountered a failure. Typically, this hook is used to run various teardown / cleanup scripts.

```yaml
# flow.yaml
```yaml title="flow.yaml"
appId: my.app
name: Custom Flow name
tags: