Hello team,
Mistral AI just released two new models: Pixtral Large (a multi-modal model for PDFs with image analysis) and Mistral Large 2, their new flagship model. At the same time they significantly upgraded their chat with new features such as Canvas, which I just discovered.
This is clearly one of the features I have been missing in AnythingLLM.
Typically, when you develop a Python module, for example, the code is spread over several files, and you work with your preferred LLM to help you fill in one part of the code or another.
For example, when I want to develop an Odoo module, with a canvas it looks like this:
What is cool about this feature is that you can work with your LLM on this "temporary buffer": all the work you do on your side can easily be followed by the LLM, and you are always in sync.
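To make the idea concrete, here is a minimal sketch of that shared-buffer concept in TypeScript (the same language as open-canvas). All names here are hypothetical illustrations of the workflow, not the actual open-canvas API: both the user and the LLM apply edits to one buffer, and the LLM reads the full current buffer each turn, so manual edits are never lost.

```typescript
// Hypothetical sketch of a "canvas": one buffer edited by both parties.
type Edit = { author: "user" | "llm"; start: number; end: number; text: string };

class Canvas {
  private buffer = "";
  private history: Edit[] = [];

  // Replace the span [start, end) of the buffer with the edit's text.
  apply(edit: Edit): void {
    this.buffer =
      this.buffer.slice(0, edit.start) + edit.text + this.buffer.slice(edit.end);
    this.history.push(edit);
  }

  // On each turn the LLM would receive this full buffer as context,
  // so any manual edits made by the user are automatically visible to it.
  contents(): string {
    return this.buffer;
  }
}

const canvas = new Canvas();
// The LLM drafts a function...
canvas.apply({ author: "llm", start: 0, end: 0, text: 'def hello():\n    pass\n' });
// ...and the user edits the body by hand; both stay in sync.
canvas.apply({ author: "user", start: 17, end: 21, text: 'print("hi")' });
console.log(canvas.contents());
```

The point is simply that there is a single source of truth for the work in progress, instead of code fragments scattered across chat messages.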
I understand this could be a lot of work, but I found this: https://github.com/langchain-ai/open-canvas
It is developed in TypeScript, so "probably" (you are the experts!) it could be doable, and it would give AnythingLLM a very cool feature :)
I am just proposing this "big feature" in case it can be included in future plans.
Thanks for your time!