Updated to work with latest langchain API #57

Merged 2 commits on Jun 30, 2024
2 changes: 1 addition & 1 deletion CODE_OF_CONDUCT.md
@@ -5,7 +5,7 @@
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
identity and expression, level of experience, education, socioeconomic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.

6 changes: 4 additions & 2 deletions chatify/chains.py
@@ -6,7 +6,6 @@
from langchain.chains.base import Chain
from langchain.prompts import PromptTemplate

from .cache import LLMCacher
from .llm_models import ModelsFactory
from .utils import compress_code

@@ -77,7 +76,8 @@ def __init__(self, config):
self.llm_models_factory = ModelsFactory()

self.cache = config["cache_config"]["cache"]
Member: should we just change this to `self.cache = False` for now to be extra safe?

Collaborator Author: Yes, that would work!

self.cacher = LLMCacher(config)
# NOTE: The caching function is deprecated
# self.cacher = LLMCacher(config)

# Setup model and chain factory
self._setup_llm_model(config["model_config"])
@@ -95,6 +95,7 @@ def _setup_llm_model(self, model_config):
if self.llm_model is None:
self.llm_model = self.llm_models_factory.get_model(model_config)

# NOTE: The caching function is deprecated
if self.cache:
self.llm_model = self.cacher.cache_llm(self.llm_model)

@@ -168,6 +169,7 @@ def execute(self, chain, inputs, *args, **kwargs):
-------
output: Output text generated by the LLM chain.
"""

if self.cache:
inputs = chain.prompt.format(text=compress_code(inputs))
output = chain.llm(inputs, cache_obj=self.cacher.llm_cache)
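The review thread above suggests defaulting the cache flag to `False` now that `LLMCacher` is deprecated. A minimal sketch of that defensive read (a hypothetical helper, not code from this PR):

```python
def read_cache_flag(config: dict) -> bool:
    """Read config["cache_config"]["cache"], defaulting to False.

    The chained .get calls keep a missing section or key from raising
    KeyError, matching the reviewer's "extra safe" suggestion.
    """
    return bool(config.get("cache_config", {}).get("cache", False))


print(read_cache_flag({"cache_config": {"cache": True}}))  # True
print(read_cache_flag({}))                                 # False
```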
2 changes: 1 addition & 1 deletion chatify/main.py
@@ -153,7 +153,7 @@ def gpt(self, inputs, prompt):
output : str
The GPT model output in markdown format.
"""
# TODO: Should we create the chain every time? Only prompt is chainging not the model
# TODO: Should we create the chain every time? Only prompt is changing not the model
Member: I've added an issue for this.

chain = self.llm_chain.create_chain(
self.cfg["model_config"], prompt_template=prompt
)
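The TODO above asks whether the chain must be rebuilt on every call when only the prompt changes. One way that could be addressed (an illustrative sketch, not Chatify's code — the factory callable stands in for `llm_chain.create_chain`) is to memoize one chain per prompt template:

```python
class ChainCache:
    """Build one chain per prompt template and reuse it afterwards."""

    def __init__(self, factory):
        self._factory = factory  # callable(prompt) -> chain
        self._chains = {}

    def get(self, prompt):
        # Only construct a chain the first time a prompt is seen.
        if prompt not in self._chains:
            self._chains[prompt] = self._factory(prompt)
        return self._chains[prompt]


builds = []
cache = ChainCache(lambda p: builds.append(p) or object())
first = cache.get("explain this code")
second = cache.get("explain this code")
print(first is second)  # True: the chain was built once and reused
print(len(builds))      # 1
```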
18 changes: 9 additions & 9 deletions chatify/prompts/tester.yaml
@@ -2,36 +2,36 @@ test my understanding with some open-ended questions:
input_variables: ['text']
content: >
SYSTEM: You are an AI assistant for Jupyter notebooks, named Chatify. Use robot-related emojis and humor to convey a friendly and relaxed tone. Your job is to help the user understand a tutorial they are working through as part of a course, one code block at a time. The user can send you one request about each code block, and you will not retain your chat history before or after their request, nor will you have access to other parts of the tutorial notebook. Because you will only see one code block at a time, you should assume that any relevant libraries are imported outside of the current code block, and that any relevant functions have already been defined in a previous notebook cell. Make reasonable guesses about what predefined functions do based on what they are named. Focus on conceptual issues whenever possible rather than minor details. You can provide code snippets if you think it is best, but it is better to provide Python-like pseudocode if possible. To comply with formatting requirements, do not ask for additional questions or clarification from the user. The only thing you are allowed to ask the user is for them to select another option from the dropdown menu or to resubmit their request again to generate a new response. Provide your response in markdown format.

ASSISTANT: How can I help?

USER: I'd like to test my understanding with some tough open-ended essay style questions about the conceptual content of this code block:

---
{text}
---

Can you make up some essay-style questions for me to make sure I really understand the important concepts? Remember that I can't respond to you, so just ask me to "think about" how I'd respond (i.e., without explicitly responding to you).

ASSISTANT:
ASSISTANT:
template_format: f-string
prompt_id: 90gwxu1n68pbc2193jy0fy5rp9yu6h9h

test my understanding with a multiple-choice question:
input_variables: ['text']
content: >
SYSTEM: You are an AI assistant for Jupyter notebooks, named Chatify. Use robot-related emojis and humor to convey a friendly and relaxed tone. Your job is to help the user understand a tutorial they are working through as part of a course, one code block at a time. The user can send you one request about each code block, and you will not retain your chat history before or after their request, nor will you have access to other parts of the tutorial notebook. Because you will only see one code block at a time, you should assume that any relevant libraries are imported outside of the current code block, and that any relevant functions have already been defined in a previous notebook cell. Make reasonable guesses about what predefined functions do based on what they are named. Focus on conceptual issues whenever possible rather than minor details. You can provide code snippets if you think it is best, but it is better to provide Python-like pseudocode if possible. To comply with formatting requirements, do not ask for additional questions or clarification from the user. The only thing you are allowed to ask the user is for them to select another option from the dropdown menu or to resubmit their request again to generate a new response. Provide your response in markdown format.

ASSISTANT: How can I help?

USER: I'd like to test my understanding with a multiple choice question about the conceptual content of this code block:

---
{text}
---

I'd like the correct answer to be either "[A]", "[B]", "[C]", or "[D]". Can you make up a multiple choice question for me so that I can make sure I really understant the most important concepts? Remember that I can't respond to you, so just ask me to "think about" which choice is correct or something else like that (i.e., without explicitly responding to you). Put two line breaks ("<br>") between each choice so that it appears correctly on my screen. In other words, there should be two line breaks between each of [B], [C], and [D].
I'd like the correct answer to be either "[A]", "[B]", "[C]", or "[D]". Can you make up a multiple choice question for me so that I can make sure I really understand the most important concepts? Remember that I can't respond to you, so just ask me to "think about" which choice is correct or something else like that (i.e., without explicitly responding to you). Put two line breaks ("<br>") between each choice so that it appears correctly on my screen. In other words, there should be two line breaks between each of [B], [C], and [D].
Member: nice catch!

Member: Actually, we'll need to update chatify-server for this change to actually take effect, but it's still good to make this change.

ASSISTANT:
ASSISTANT:
template_format: f-string
prompt_id: cqeas35w0wzhvemd6vtduj0qcf8njo4b
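The `template_format: f-string` entries above mean the `{text}` placeholder in each prompt is filled with the user's code block. A minimal illustration using plain `str.format` (not langchain's own templating machinery):

```python
# A shortened stand-in for the YAML prompt content above.
template = "USER: I'd like a question about this code block:\n---\n{text}\n---"

# The notebook cell's code is substituted for the {text} variable.
prompt = template.format(text="y = model(x)")
print("y = model(x)" in prompt)  # True
```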
5 changes: 2 additions & 3 deletions setup.py
@@ -11,15 +11,14 @@
history = history_file.read()

requirements = [
"gptcache<=0.1.35",
"langchain<=0.0.226",
"langchain>=0.2.6",
Member: should this be strictly greater than?

Member: or... do we even need to pin a version?

Collaborator Author: I agree with you, no need to pin the version.
"langchain-community",
"openai",
"markdown",
"ipywidgets",
"requests",
"markdown-it-py[linkify,plugins]",
"pygments",
"pydantic==1.10.11",
]
extras = [
"transformers",
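The thread above weighs the new version floor (`langchain>=0.2.6`) against the old ceiling (`langchain<=0.0.226`). A small stdlib sketch of why the two specifiers admit different versions (a naive numeric comparison; real installers follow PEP 440 rules):

```python
def parse(version: str) -> tuple:
    # Naive dot-split into integers; enough for plain X.Y.Z versions.
    return tuple(int(part) for part in version.split("."))


# The old ceiling "<=0.0.226" rejects the 0.2.x line the PR targets;
# the new floor ">=0.2.6" requires it.
print(parse("0.2.6") <= parse("0.0.226"))  # False: blocked by the old pin
print(parse("0.2.6") >= parse("0.2.6"))    # True: allowed by the new floor
```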