Add source links (#457)
harishmohanraj authored Oct 21, 2024
1 parent 0b3e46e commit ec825b1
Showing 1 changed file with 29 additions and 12 deletions: docs/docs/en/user-guide/runtimes/autogen/using_non_openai_models.md

FastAgency makes it simple to work with **non-OpenAI models** through AutoGen's runtime. You can do this in a couple of ways:

- via [proxy servers that provide an OpenAI-compatible API](https://microsoft.github.io/autogen/0.2/docs/topics/non-openai-models/about-using-nonopenai-models/#openai-compatible-api-proxy-server){target="_blank"}
- by [using a custom model client class](https://microsoft.github.io/autogen/0.2/docs/topics/non-openai-models/about-using-nonopenai-models/#custom-model-client-class){target="_blank"}, which lets you define and load your own models.

This flexibility allows you to **access a variety of models**, assign **tailored models to agents**, and **optimise inference costs**, among other advantages.

To show how simple it is to use **non-OpenAI models**, we'll **rewrite** the [Weatherman chatbot](./index.md#example-integrating-a-weather-api-with-autogen) example. With just a **few changes**, we'll switch to the [Together AI](https://www.together.ai){target="_blank"} Cloud platform, utilizing their **Meta-Llama-3.1-70B-Instruct-Turbo** model. For a comprehensive list of models available through Together AI, please refer to their official [documentation](https://docs.together.ai/docs/chat-models){target="_blank"}.

Let’s dive in!

## Installation

Before getting started, make sure you have installed FastAgency with **[autogen](../../../api/fastagency/runtimes/autogen/autogen/AutoGenWorkflows.md) and [openapi](../../../api/fastagency/api/openapi/OpenAPI.md) submodules** by running the following command:

```bash
pip install "fastagency[autogen,openapi]"
```

This installation includes the AutoGen runtime, allowing you to build multi-agent workflows.

Before you begin this guide, ensure you have:

- **Together AI account and API Key**: To create a [Together AI](https://www.together.ai){target="_blank"} account and obtain your API key, follow the steps in the section below.

### Setting Up Your Together AI Account and API Key

**1. Create a Together AI account:**

- Go to <a href="https://api.together.ai" target="_blank">https://api.together.ai</a>.
- Choose a sign-in option and follow the instructions to create your account.
- If you already have an account, simply log in.

**2. Obtain your API Key:**

- Once you complete the account creation process, your API key will be displayed on the screen for you to copy.
- Alternatively, you can view your API key as follows:
- Click the person icon in the top-right corner, then select [Settings](https://api.together.ai/settings/profile){target="_blank"}
- In the left sidebar, navigate to [API Keys](https://api.together.ai/settings/api-keys){target="_blank"}
- **Copy your API key**, and you're ready to go!

#### Set Up Your API Keys in the Environment
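
As a minimal sketch, assuming the key is exported as an environment variable named `TOGETHER_API_KEY` (an illustrative name; the actual variable used by the example code may differ), you can verify from Python that it is available before running the workflow:

```python
import os

# Assumption: the Together AI key obtained above is exported as
# TOGETHER_API_KEY, e.g. `export TOGETHER_API_KEY="your-key"` in your shell.
together_api_key = os.getenv("TOGETHER_API_KEY")
if not together_api_key:
    raise RuntimeError("Please set the TOGETHER_API_KEY environment variable.")
```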
Since the modifications are minor, **I will focus only on these differences in this guide**.

#### 1. Configure the Language Model (LLM)

First, update the LLM configuration to use **non-OpenAI models**. For our example, we'll use **meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo**, but you can choose any model from [Together AI](https://www.together.ai){target="_blank"} Cloud. For a complete list, refer to their official [documentation](https://docs.together.ai/docs/chat-models){target="_blank"}.


Next, add two parameters: `api_type` and `hide_tools`.

- `hide_tools`

The [hide_tools](https://microsoft.github.io/autogen/0.2/docs/topics/non-openai-models/local-ollama#reducing-repetitive-tool-calls){target="_blank"} parameter in AutoGen controls when tools are visible during LLM conversations. It addresses a common issue where LLMs might **repeatedly recommend tool calls**, even after they've been executed, potentially creating an **endless loop** of tool invocations.

This parameter offers three options to control tool visibility:

1. `never`: Tools are always visible to the LLM
2. `if_all_run`: Tools are hidden once all the tools have been called
3. `if_any_run`: Tools are hidden after any of the tools has been called

In our example, we set `hide_tools` to `if_any_run` to hide the tools once any of them has been called, improving conversation flow.

- `api_type`

Set the `api_type` to `together` to instruct FastAgency to use Together AI Cloud for model inference.

```python
{! docs_src/user_guide/runtimes/autogen/using_non_openai_models.py [ln:12-22] !}
```
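
For reference, here is a minimal illustrative sketch of what such an `llm_config` might look like. It assumes AutoGen's standard config-list layout and the `TOGETHER_API_KEY` environment variable; the authoritative version is the snippet included above from `docs_src/user_guide/runtimes/autogen/using_non_openai_models.py`.

```python
import os

# Illustrative sketch only; the exact configuration ships with the example in
# docs_src/user_guide/runtimes/autogen/using_non_openai_models.py.
llm_config = {
    "config_list": [
        {
            "model": "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
            "api_key": os.getenv("TOGETHER_API_KEY"),  # assumed env variable name
            "api_type": "together",       # use Together AI Cloud for inference
            "hide_tools": "if_any_run",   # hide tools once any of them has been called
        }
    ],
}
```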
