upgrading langchain-ibm to support LangChain v0.2 #6

Merged
29 changes: 17 additions & 12 deletions libs/ibm/README.md
```python
os.environ["WATSONX_APIKEY"] = watsonx_api_key
```
Alternatively, you can set the environment variable in your terminal.

- **Linux/macOS:** Open your terminal and execute the following command:
  ```bash
  export WATSONX_APIKEY='your_ibm_api_key'
  ```

  To make this environment variable persistent across terminal sessions, add the above line to your `~/.bashrc`, `~/.bash_profile`, or `~/.zshrc` file.

- **Windows:** For Command Prompt, use:
  ```cmd
  set WATSONX_APIKEY=your_ibm_api_key
  ```
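As a quick sanity check (a hypothetical snippet, not part of the library), you can confirm from Python that the variable is visible to your process:

```python
import os

# os.environ only reflects variables exported before the interpreter started,
# so re-open your terminal or re-export after editing a shell profile.
api_key = os.environ.get("WATSONX_APIKEY")
if api_key is None:
    print("WATSONX_APIKEY is not set; export it or add it to your shell profile.")
else:
    print("WATSONX_APIKEY is set.")
```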

### Loading the model

```python
watsonx_llm = WatsonxLLM(
    # ... (model_id, url, project_id, and other parameters elided in this diff) ...
)
```

**Note:**

- You must provide a `project_id` or `space_id`. For more information refer to IBM's [documentation](https://www.ibm.com/docs/en/watsonx-as-a-service?topic=projects).
- Depending on the region of your provisioned service instance, use one of the URLs listed [here](https://ibm.github.io/watsonx-ai-python-sdk/setup_cloud.html#authentication).
- You need to specify the model you want to use for inferencing through `model_id`. You can find the list of available models [here](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#ibm_watsonx_ai.foundation_models.utils.enums.ModelTypes).


Alternatively you can use Cloud Pak for Data credentials. For more details, refer to IBM's [documentation](https://ibm.github.io/watsonx-ai-python-sdk/setup_cpd.html).

```python
watsonx_llm = WatsonxLLM(
    # ... (Cloud Pak for Data credentials elided in this diff) ...
)
```
Create `PromptTemplate` objects which will be responsible for creating a random question.

```python
from langchain_core.prompts import PromptTemplate

template = "Generate a random question about {topic}: Question: "
prompt = PromptTemplate.from_template(template)
```
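Under the hood, the `{topic}` placeholder follows ordinary `str.format` substitution; a plain-Python sketch of what rendering this template produces (illustrative only, without importing LangChain):

```python
template = "Generate a random question about {topic}: Question: "

# PromptTemplate parses {topic} as an input variable; rendering the template
# is equivalent to standard str.format substitution.
rendered = template.format(topic="dog")
print(rendered)  # Generate a random question about dog: Question: 
```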
Provide a topic and run the chain.

```python
from langchain_core.output_parsers import StrOutputParser

llm_chain = prompt | watsonx_llm | StrOutputParser()
topic = "dog"
llm_chain.invoke(topic)
```
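Conceptually, the `|` operator composes Runnables so that each step's output feeds the next. A minimal pure-Python sketch of that composition (a toy stand-in, not LangChain's actual implementation; `fake_llm` is a hypothetical placeholder for `watsonx_llm`):

```python
class Runnable:
    """Toy stand-in for LangChain's Runnable protocol (illustrative only)."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Compose: run self first, then feed its output into `other`.
        return Runnable(lambda value: other.invoke(self.invoke(value)))


prompt = Runnable(lambda topic: f"Generate a random question about {topic}: Question: ")
fake_llm = Runnable(lambda text: "What breed is the fastest dog?")  # canned reply
parser = Runnable(lambda text: text.strip())

chain = prompt | fake_llm | parser
print(chain.invoke("dog"))  # What breed is the fastest dog?
```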

### Calling the Model Directly

To obtain completions, you can call the model directly using a string prompt.

```python
# Prompt elided in this diff; `invoke` accepts a plain string.
watsonx_llm.invoke("...")
```