This repository has been archived by the owner on Apr 3, 2024. It is now read-only.
Problem

In the LLM landscape, LangChain has support for:

There remains a gap in fine-tuning support: education, tooling, and usable examples (like the Prompts in the Hub).

When to use fine-tuning?

I found @daveshap 's YouTube video OpenAI Q&A: Finetuning GPT-3 vs Semantic Search - which to use, when, and why? incredibly informative, especially this comparison:

🎯 There is still a purpose to fine-tuning: when you want to teach a new task/pattern.

For example, patterns which fine-tuning helps with:

- ChatGPT: short user query => long machine answer
- Email
- Novel / Fiction

I think LangChain and the community have an opportunity to build tools that make dataset generation for fine-tuning easier, provide educational examples, and offer ready-made datasets for bootstrapping production-ready applications.

Proposal

Recreate the examples @daveshap made using LangChain and add the results to the Hub!

@daveshap : What do you think about this idea? I've been inspired by learning from your YouTube videos recently while using LangChain. I think it would be an incredible win for the community to combine our efforts in building incredible products with LLMs!
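As a concrete starting point, the dataset-generation tooling mentioned above could begin as a small script that writes (query, answer) pairs into a JSONL file of prompt/completion records. This is only a sketch: the function name `build_finetune_dataset` and the separator/stop-token conventions are assumptions modeled on OpenAI's legacy fine-tuning format, not an existing LangChain API.

```python
import json

def build_finetune_dataset(pairs, path):
    """Write (user_query, machine_answer) pairs as JSONL fine-tuning records.

    Hypothetical helper for illustration; the record format follows
    OpenAI's legacy prompt/completion guidance and may need adjusting
    for the model you are targeting.
    """
    with open(path, "w", encoding="utf-8") as f:
        for query, answer in pairs:
            record = {
                # Fixed separator marks the end of the prompt.
                "prompt": query.strip() + "\n\n###\n\n",
                # Leading space and a stop sequence for the completion.
                "completion": " " + answer.strip() + " END",
            }
            f.write(json.dumps(record) + "\n")

# Example: one record in the "short user query => long machine answer" pattern.
pairs = [
    ("How do I reverse a list in Python?",
     "Use slicing: my_list[::-1] returns a reversed copy, "
     "while my_list.reverse() reverses the list in place."),
]
build_finetune_dataset(pairs, "finetune.jsonl")
```

A tool like this could sit alongside LangChain's existing prompt utilities, with the community contributing ready-made datasets to the Hub in the same way prompts are shared today.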