examples: Add example readmes (#916)
* examples: Add READMEs to each example
Showing 72 changed files with 2,576 additions and 44 deletions.
# 🎉 LangChain.go Examples

This directory contains a collection of example projects demonstrating various features and integrations of LangChain.go, a powerful library for building applications with large language models (LLMs).
## Contents

The examples cover a wide range of topics, including:

- LLM integrations (OpenAI, Anthropic, Google AI, Ollama, etc.)
- Chains and agents
- Vector stores and embeddings
- Prompt engineering
- Memory and conversation management
- Database integrations
- And more!

Each example is contained in its own subdirectory with a descriptive name, making it easy to find and explore specific use cases.

## Running the Examples

To run an example:

1. Navigate to the desired example directory
2. Ensure you have the necessary dependencies installed (usually by running `go mod tidy`)
3. Set any required environment variables (e.g., API keys)
4. Run the example with `go run .`
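In practice the steps above look like this (the directory and environment-variable names are illustrative — each example's own README lists the exact ones it needs):

```shell
# From the examples directory, pick an example to run
cd openai-chat-example

# Fetch the dependencies declared in the example's go.mod
go mod tidy

# Most examples read an API key from an environment variable
export OPENAI_API_KEY="sk-..."

# Build and run the example in the current directory
go run .
```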
## Key Examples

Some notable examples include:

- `openai-chat-example`: Demonstrates basic chat functionality with OpenAI's GPT models
- `mrkl-agent-example`: Shows how to create an agent that can use tools to solve complex tasks
- `chroma-vectorstore-example`: Illustrates using Chroma as a vector store for similarity search
- `sql-database-chain-example`: Showcases querying SQL databases using natural language

## Contributing

Feel free to contribute your own examples or improvements to existing ones! Please follow the established structure and include clear documentation.

Happy exploring and building with LangChain.go! 🚀
# Anthropic Completion Example

Hello there, fellow Go enthusiasts and AI adventurers! 👋 Welcome to this exciting example of using the Anthropic API with Go!

## What's in this directory?

This directory contains a simple yet powerful example of how to use the Anthropic API to generate text completions using Go. Here's what you'll find:

1. `anthropic_completion_example.go`: This is the main Go file that demonstrates how to use the Anthropic API. It's a great starting point for your AI-powered adventures!

## What does the code do?

The `anthropic_completion_example.go` file showcases how to:

- Initialize an Anthropic LLM (Language Model) client
- Generate text completions using the Claude 3 Opus model
- Stream the generated text in real-time

It even includes a fun prompt asking Claude to write a poem about Golang-powered AI systems! 🤖📝

## How to use this example

1. Make sure you have Go installed on your system.
2. Set up your Anthropic API key as an environment variable.
3. Run the example using `go run anthropic_completion_example.go`.
4. Watch as the AI-generated poem streams to your console!
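In outline, a streaming completion follows a shape like this (a sketch against the langchaingo API, not the example's exact source — the model identifier is an assumption and may differ from the one the example pins):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/anthropic"
)

func main() {
	ctx := context.Background()

	// The client reads the API key from the environment.
	llm, err := anthropic.New(anthropic.WithModel("claude-3-opus-20240229"))
	if err != nil {
		log.Fatal(err)
	}

	// Stream the completion chunk by chunk as it is generated.
	_, err = llms.GenerateFromSinglePrompt(ctx, llm,
		"Write a short poem about Golang-powered AI systems.",
		llms.WithStreamingFunc(func(ctx context.Context, chunk []byte) error {
			fmt.Print(string(chunk))
			return nil
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
}
```

The streaming callback is invoked once per generated chunk, which is what makes the poem appear in the console in real time.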
## Dependencies

This project uses the fantastic `langchaingo` library to interact with the Anthropic API. It's a great tool for building AI-powered applications in Go!

## What to expect

When you run the example, you'll see a poem about Golang-powered AI systems being generated and printed to your console in real-time. It's like watching an AI poet at work! 🎭

## Have fun!

We hope this example inspires you to create amazing AI-powered applications using Go and Anthropic's powerful language models. Happy coding! 🚀🎉
# Anthropic Tool Call Example 🛠️🤖

Welcome to the Anthropic Tool Call Example! This fun little project demonstrates how to use the Anthropic API with tool calling capabilities in Go. It's a great way to see how AI models can interact with external tools and functions!

## What's Inside? 📦

This directory contains two main files:

1. `anthropic-tool-call-example.go`: The star of the show! 🌟 This Go file contains a complete example of how to:
   - Set up an Anthropic LLM client
   - Define available tools (in this case, a weather function)
   - Send queries to the model
   - Handle tool calls and responses
   - Maintain a conversation history
2. `go.mod`: The module definition file for this project. It lists the required dependencies, including the awesome `langchaingo` library!
## What Does It Do? 🤔

This example showcases a conversation with an AI model about the weather in different cities. Here's what happens:

1. It sets up an Anthropic LLM client using the Claude 3 Haiku model.
2. Defines a `getCurrentWeather` function as an available tool.
3. Sends an initial query about the weather in Boston.
4. The AI model calls the weather function to get information.
5. The program executes the tool call and sends the result back to the model.
6. The conversation continues with questions about weather in Chicago.
7. The program demonstrates how to maintain context and use tool calls throughout a multi-turn conversation.

## Cool Features 😎

- **Tool Calling**: Shows how to define and use external tools with the AI model.
- **Conversation History**: Demonstrates maintaining context across multiple interactions.
- **Error Handling**: Includes proper error checking and logging.
- **Flexible Weather Info**: Uses a simple map to simulate weather data for different cities.
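The core of the tool-calling flow can be sketched roughly like this (hedged against the langchaingo API; the model identifier and the JSON schema for the weather tool are illustrative, not copied from the example):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/anthropic"
)

func main() {
	ctx := context.Background()
	llm, err := anthropic.New(anthropic.WithModel("claude-3-haiku-20240307"))
	if err != nil {
		log.Fatal(err)
	}

	// Describe the tool the model is allowed to call.
	tools := []llms.Tool{{
		Type: "function",
		Function: &llms.FunctionDefinition{
			Name:        "getCurrentWeather",
			Description: "Get the current weather in a given location",
			Parameters: map[string]any{
				"type": "object",
				"properties": map[string]any{
					"location": map[string]any{"type": "string"},
				},
				"required": []string{"location"},
			},
		},
	}}

	history := []llms.MessageContent{
		llms.TextParts(llms.ChatMessageTypeHuman, "What is the weather like in Boston?"),
	}

	resp, err := llm.GenerateContent(ctx, history, llms.WithTools(tools))
	if err != nil {
		log.Fatal(err)
	}

	// If the model decided to call the tool, the program would execute it
	// locally and append the result to history before the next turn.
	for _, tc := range resp.Choices[0].ToolCalls {
		fmt.Println("model requested tool:", tc.FunctionCall.Name, tc.FunctionCall.Arguments)
	}
}
```

The loop at the end is where the example feeds tool results back into the conversation, which is how context survives across the Boston and Chicago turns.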
## How to Run 🏃

1. Make sure you have Go installed on your system.
2. Set up your Anthropic API key as an environment variable.
3. Run the example with: `go run anthropic-tool-call-example.go`

Enjoy exploring the world of AI and tool calling with this fun example! 🎉🤖🌦️
# Bedrock Claude 3 Vision Example

Hello there! 👋 This example demonstrates how to use the Anthropic Claude 3 Haiku model with AWS Bedrock for image analysis using Go and the LangChain Go library. Let's break down what this exciting code does!

## What This Example Does

1. **Sets Up AWS Bedrock**: The code initializes an AWS Bedrock client to interact with the Claude 3 Haiku model. Make sure you have the necessary permissions set up in your AWS account!

2. **Loads an Image**: An image file (`image.png`) is embedded into the binary using Go's `embed` package. This image will be analyzed by the AI model.

3. **Sends a Request**: The code constructs a request to the Claude 3 model, including:
   - The image data (in PNG format)
   - A text prompt asking to identify the string on a box in the image

4. **Processes the Response**: After sending the request, the code handles the response from the AI model, extracting the generated content and some metadata about token usage.

5. **Outputs Results**: Finally, it prints out the AI's interpretation of what string is on the box in the image.
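Steps 2 and 3 combine an embedded image with a text prompt in one multimodal message. In outline (a sketch against the langchaingo API — the Bedrock model ID and prompt wording are assumptions, so check the example's source for the exact values it uses):

```go
package main

import (
	"context"
	_ "embed"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/bedrock"
)

//go:embed image.png
var imageData []byte // the PNG is compiled into the binary

func main() {
	ctx := context.Background()

	// Credentials come from the standard AWS environment/config chain.
	llm, err := bedrock.New(bedrock.WithModel("anthropic.claude-3-haiku-20240307-v1:0"))
	if err != nil {
		log.Fatal(err)
	}

	// A single message combining the image bytes with a text prompt.
	resp, err := llm.GenerateContent(ctx, []llms.MessageContent{{
		Role: llms.ChatMessageTypeHuman,
		Parts: []llms.ContentPart{
			llms.BinaryPart("image/png", imageData),
			llms.TextPart("What is the string printed on the box in this image?"),
		},
	}})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Choices[0].Content)
}
```

The key idea is that `Parts` may mix binary and text content, which is what makes the request multimodal.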
## Key Features

- **Multimodal AI**: This example showcases how to work with both image and text inputs in a single AI request.
- **AWS Integration**: Demonstrates integration with AWS Bedrock for accessing powerful AI models.
- **Error Handling**: Includes basic error checking to ensure the process runs smoothly.
- **Token Usage Tracking**: Logs the number of input and output tokens used, which can be helpful for monitoring usage and costs.

## Running the Example

To run this example, you'll need:

1. An AWS account with access to Bedrock and the Claude 3 Haiku model
2. Proper AWS credentials set up on your machine
3. The required Go dependencies installed

Once everything is set up, simply run the Go file, and it should output the AI's interpretation of the text on the box in the image!

Happy coding, and enjoy exploring the fascinating world of multimodal AI with Claude 3 and AWS Bedrock! 🚀🖼️🤖
# Caching LLM Example

This example demonstrates how to implement caching for a Language Model (LLM) using the LangChain Go library. The program showcases the benefits of caching by repeatedly querying an LLM and measuring the response time.

## What This Example Does

1. **Sets up an LLM**:
   - Initializes an Ollama LLM using the "llama2" model.

2. **Implements Caching**:
   - Creates an in-memory cache that stores results for one minute.
   - Wraps the base LLM with the caching functionality.

3. **Performs Repeated Queries**:
   - Asks the same question ("Who was the first man to walk on the moon?") three times.
   - The first query will use the actual LLM, while subsequent queries will retrieve the cached response.

4. **Measures and Displays Performance**:
   - Records the time taken for each query.
   - Prints the response along with the time taken for each iteration.

5. **Formats Output**:
   - Uses word wrapping to ensure neat output within an 80-character width.
   - Separates each iteration with a line of "=" characters.
## Key Features

- **LLM Caching**: Demonstrates how to implement caching to improve response times for repeated queries.
- **Performance Measurement**: Shows the time difference between cached and non-cached responses.
- **Ollama Integration**: Uses the Ollama LLM with the "llama2" model.
- **Output Formatting**: Ensures readable output with proper word wrapping and separation between iterations.

## Running the Example

When you run this example, you'll see the LLM's response to the question about the first man on the moon, repeated three times. The first response will likely take longer as it queries the actual LLM, while the subsequent responses should be significantly faster due to caching.

This example is great for understanding how caching can dramatically improve response times in applications that use LLMs, especially when similar queries are likely to be repeated.
# Conversational Memory with SQLite in LangChain

Hello there! 👋 This example demonstrates how to create a conversational AI system with memory persistence using SQLite in Go with the LangChain library. Let's break down what this exciting code does!

## What Does This Example Do?

1. **Sets up an OpenAI Language Model**: It initializes an OpenAI language model to power our conversational AI.

2. **Creates a SQLite Database**: The code sets up a SQLite database to store conversation history.

3. **Implements Conversation Memory**: It uses SQLite to maintain a persistent memory of the conversation, allowing the AI to remember previous interactions.

4. **Prepares Sample Data**: If the database is empty, it inserts a sample message to kickstart the conversation.

5. **Runs a Conversation**: The example runs a conversation chain, asking the AI a question that requires memory of previous interactions.

## Key Components

- **SQLite Chat Message History**: Uses `sqlite3.NewSqliteChatMessageHistory` to create a chat history stored in SQLite.
- **Conversation Buffer**: Implements `memory.NewConversationBuffer` to manage the conversation memory.
- **Conversation Chain**: Creates a `chains.NewConversation` to handle the flow of the conversation.
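Those three components wire together roughly like this (a sketch; the option names on the SQLite history constructor and the session label are assumptions — consult the langchaingo `memory/sqlite3` package for the exact API):

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3" // SQLite driver

	"github.com/tmc/langchaingo/chains"
	"github.com/tmc/langchaingo/llms/openai"
	"github.com/tmc/langchaingo/memory"
	"github.com/tmc/langchaingo/memory/sqlite3"
)

func main() {
	ctx := context.Background()

	llm, err := openai.New()
	if err != nil {
		log.Fatal(err)
	}

	db, err := sql.Open("sqlite3", "chat.db")
	if err != nil {
		log.Fatal(err)
	}

	// Chat history persisted in SQLite (option names assumed).
	history := sqlite3.NewSqliteChatMessageHistory(
		sqlite3.WithDB(db),
		sqlite3.WithSession("example"),
	)

	// Conversation buffer backed by the persistent history.
	buf := memory.NewConversationBuffer(memory.WithChatHistory(history))

	// The chain injects the stored history into every prompt.
	chain := chains.NewConversation(llm, buf)
	out, err := chains.Run(ctx, chain, "What's my name? How many times did I ask this?")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```

Because the buffer reads from and writes to SQLite, the conversation survives across process restarts — that persistence is the whole point of this example.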
## How It Works

1. The code first checks if there's any existing data in the SQLite database.
2. If empty, it inserts a sample message: "Hi there, my name is Murilo!"
3. It then asks the AI: "What's my name? How many times did I ask this?"
4. The AI responds based on the conversation history stored in the SQLite database.

This example showcases how to create a conversational AI system with persistent memory, allowing for more context-aware and personalized interactions over time!

Feel free to run this example and experiment with different questions to see how the AI remembers and uses previous conversation context! 🚀🤖
# Chroma Vector Store Example

This example demonstrates how to use the Chroma vector store with LangChain in Go. It showcases various operations and queries on a vector store containing information about cities.

## What This Example Does

1. **Vector Store Creation**: The example starts by creating a new Chroma vector store using environment variables for configuration.

2. **Adding Documents**: It adds a list of documents to the vector store. Each document represents a city with its name, population, and area.

3. **Similarity Searches**: The example performs three different similarity searches:

   a. **Up to 5 Cities in Japan**: Searches for cities located in Japan, limiting the results to 5 and using a score threshold.

   b. **A City in South America**: Looks for a single city in South America, also using a score threshold.

   c. **Large Cities in South America**: Searches for large cities in South America, using filters for area and population.

4. **Result Display**: Finally, it prints out the results of each search, showing the matching cities for each query.
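The add-then-search flow looks roughly like this (a sketch, not the example's source; the Chroma constructor options, environment-variable name, and embedder choice are all assumptions — the real example reads its configuration from its own environment variables):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/tmc/langchaingo/embeddings"
	"github.com/tmc/langchaingo/llms/openai"
	"github.com/tmc/langchaingo/schema"
	"github.com/tmc/langchaingo/vectorstores"
	"github.com/tmc/langchaingo/vectorstores/chroma"
)

func main() {
	ctx := context.Background()

	// An embedder turns document text into vectors before storage.
	llm, err := openai.New()
	if err != nil {
		log.Fatal(err)
	}
	embedder, err := embeddings.NewEmbedder(llm)
	if err != nil {
		log.Fatal(err)
	}

	store, err := chroma.New(
		chroma.WithChromaURL(os.Getenv("CHROMA_URL")), // assumed option/variable names
		chroma.WithEmbedder(embedder),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Documents carry metadata that filtered searches can later match on.
	_, err = store.AddDocuments(ctx, []schema.Document{
		{PageContent: "Tokyo", Metadata: map[string]any{"population": 9.7, "area": 622}},
		{PageContent: "Buenos Aires", Metadata: map[string]any{"population": 15.1, "area": 203}},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Up to 5 matches, keeping only results above a similarity threshold.
	results, err := store.SimilaritySearch(ctx, "Cities in Japan", 5,
		vectorstores.WithScoreThreshold(0.8),
	)
	if err != nil {
		log.Fatal(err)
	}
	for _, doc := range results {
		fmt.Println(doc.PageContent)
	}
}
```

The score threshold and metadata filters are both passed as search options, which is how the three queries in the example narrow their results differently.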
## Key Features

- Demonstrates the use of the Chroma vector store in Go
- Shows how to add documents with metadata to a vector store
- Illustrates different types of similarity searches with various options
- Showcases the use of filters in vector store queries
- Provides examples of working with environment variables for configuration

This example is excellent for developers looking to understand how to integrate and use vector stores in their Go applications, particularly for semantic search and similarity matching tasks.
# Cohere Completion Example

Hello there! 👋 This example demonstrates how to use the Cohere language model for text completion using the LangChain Go library. Let's break down what this exciting little program does!

## What Does This Example Do?

1. **Sets Up the Cohere LLM**: The program initializes a Cohere language model using the `cohere.New()` function.

2. **Prepares the Input**: It defines a simple input prompt: "The first man to walk on the moon".

3. **Generates Completion**: Using the `llms.GenerateFromSinglePrompt()` function, it sends the input to the Cohere model and receives a completion.

4. **Displays the Result**: The generated completion is printed to the console.

5. **Token Counting**: As a bonus, it counts the number of tokens in both the input and output, giving you an idea of the model's verbosity.
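Steps 1–4 fit in a few lines (a sketch against the langchaingo API; the token-counting step is omitted here, and the prompt is the one the example describes):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/cohere"
)

func main() {
	ctx := context.Background()

	// The client reads its API key from the environment.
	llm, err := cohere.New()
	if err != nil {
		log.Fatal(err)
	}

	// Send a single prompt and print the model's completion.
	completion, err := llms.GenerateFromSinglePrompt(ctx, llm,
		"The first man to walk on the moon")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(completion)
}
```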
## How to Run

1. Make sure you have Go installed on your system.
2. Set up your Cohere API key as an environment variable (the exact name depends on the LangChain Go implementation).
3. Run the program with `go run cohere_completion_example.go`.

## What to Expect

When you run this program, you'll see:

1. The generated completion based on the input prompt about the first man on the moon.
2. A token count in the format "input tokens / output tokens".

This example is perfect for anyone looking to get started with using Cohere's language model in their Go projects. It's a simple yet powerful demonstration of AI-powered text generation!

Happy coding! 🚀🌙
# Cybertron Embedding Example

Hello there! 👋 This example demonstrates how to use the Cybertron embedding model with LangChain in Go. It's a fun and practical way to explore document embeddings and similarity searches. Let's break down what this example does!

## What Does This Example Do?

This example showcases two main features:

1. In-memory document similarity comparison
2. Vector store integration with Weaviate

### In-Memory Document Similarity

The `exampleInMemory` function does the following:

- Creates embeddings for three words: "tokyo", "japan", and "potato"
- Calculates the cosine similarity between each pair of words
- Prints out the similarity scores

This helps you understand how semantically related different words are in the embedding space.

### Weaviate Vector Store Integration

The `exampleWeaviate` function demonstrates how to use the Cybertron embeddings with a Weaviate vector store:

- Creates a Weaviate vector store using the Cybertron embedder
- Adds three documents to the store: "tokyo", "japan", and "potato"
- Performs a similarity search for the query "japan"
- Prints out the matching results and their similarity scores

This shows how you can use embeddings for more advanced document retrieval tasks.

## Key Components

1. **Cybertron Embedder**: The example uses the "BAAI/bge-small-en-v1.5" model to generate embeddings. This model is automatically downloaded and cached.

2. **Cosine Similarity**: A custom function is implemented to calculate the similarity between embeddings.

3. **Weaviate Integration**: The example shows how to set up and use a Weaviate vector store with the Cybertron embeddings.
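A cosine-similarity function of the kind the example implements looks like this (a sketch with toy three-dimensional vectors standing in for real embeddings, which typically have hundreds of dimensions):

```go
package main

import (
	"fmt"
	"math"
)

// cosineSimilarity returns dot(a,b) / (|a|·|b|) for two equal-length vectors:
// 1 means identical direction, 0 means orthogonal (unrelated).
func cosineSimilarity(a, b []float32) float32 {
	var dot, normA, normB float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		normA += float64(a[i]) * float64(a[i])
		normB += float64(b[i]) * float64(b[i])
	}
	return float32(dot / (math.Sqrt(normA) * math.Sqrt(normB)))
}

func main() {
	// Toy vectors: "tokyo" and "japan" point in similar directions,
	// "potato" points elsewhere.
	tokyo := []float32{0.9, 0.1, 0.2}
	japan := []float32{0.8, 0.2, 0.25}
	potato := []float32{0.1, 0.9, 0.3}

	fmt.Printf("tokyo/japan:  %.3f\n", cosineSimilarity(tokyo, japan))
	fmt.Printf("tokyo/potato: %.3f\n", cosineSimilarity(tokyo, potato))
}
```

With real embeddings the same function reveals that "tokyo" and "japan" sit much closer together in the embedding space than either does to "potato".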
## How to Run

To run this example:

1. Ensure you have Go installed on your system.
2. Set up the required environment variables for Weaviate (if you want to run the Weaviate example):
   - `WEAVIATE_SCHEME`
   - `WEAVIATE_HOST`
3. Run the example using `go run cybertron-embedding.go`

## Note

The Cybertron model runs locally on your CPU, so larger models might be slow. The example uses a smaller model for better performance.

Have fun exploring embeddings and semantic similarity with this example! 🚀🔍