using local LLM in Ollama #98
Conversation
Tried mistral-nemo (12B) and llama3.1 (8B); mistral-nemo has better performance. |
There's an error message showing up on this branch, and I don't know why it's happening. @jli113, can you tell me your machine specs? |
Ubuntu 24.04, 128 GB RAM, GPU is a single 4070 Ti Super. My error was insufficient GPU memory, since Ollama had already used all of it. |
@jli113 there's a service known as "ola cloud krutrim" in which they provide you computational resource, but you've to verify your indian phone no., (maybe you can try blocking the element, using ublock)... |
@xprabhudayal JSON format it is, problem solved. |
Are you talking about the final_info.json problem? The same is happening to me, in the run_i directory. Meanwhile, where can you find the generated version of the paper?
|
@jli113 how can we eliminate this error? In the last part, success=false. |
@xprabhudayal This relates to the do_idea part in launch_scientist.py; I updated it in yesterday's commits. |
I'm a bit disappointed with the model since it doesn't generate the PDF; this is as far as I've gotten. Meanwhile, how much progress have you made so far, @jli113 :)? |
@xprabhudayal, see this. In the logs there are tex outputs, but the PDF file doesn't have any contents. |
I think we have to manually filter out the tex ones...
|
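For the filtering, something like this rough sketch is what I mean: pull only the LaTeX document out of a mixed response before writing the .tex file (the regex and file paths are my own placeholders, not what the repo actually does):

```python
import re
from pathlib import Path

def extract_latex(text: str):
    """Heuristic: grab the LaTeX document from a response that mixes prose,
    logs, and code fences. Not the repo's actual logic, just an illustration."""
    match = re.search(r"\\documentclass.*?\\end\{document\}", text, re.DOTALL)
    return match.group(0) if match else None

raw = Path("run_0/response.txt").read_text()  # hypothetical dump of the model output
tex = extract_latex(raw)
if tex:
    Path("run_0/latex/template.tex").write_text(tex)  # hypothetical output path
else:
    print("No complete LaTeX document found in the response")
```
|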
@jli113 it's not generating because we're using a weaker model than GPT-4o. |
I tried it with the Groq llama3.1:70b, but hit a rate limit error... |
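In the meantime, one way around the rate limit (just a sketch of the usual retry pattern, not something launch_scientist.py does): wrap the call in an exponential-backoff retry.

```python
import random
import time

def call_with_backoff(call, max_retries=6):
    """Retry a zero-argument LLM call on rate-limit style failures.

    Which exception class signals a rate limit depends on the client library,
    so this just inspects the error message (an assumption)."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as e:
            msg = str(e).lower()
            if "rate limit" not in msg and "429" not in msg:
                raise
            delay = min(60, 2 ** attempt) + random.random()
            print(f"Rate limited, retrying in {delay:.1f}s (attempt {attempt + 1})")
            time.sleep(delay)
    raise RuntimeError("Still rate limited after retries")
```
|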
I'm using vasiliyeskin's code to make weak models create a full PDF; hope it will work. |
Tried llama3.1:70b, still nothing. I have attached the logs. Don't know why it is not writing to the tex file. |
@xprabhudayal It finally writes to the file; not pretty, but it works.
|
Thanks, I'll check it out!
|
@jli113 I was just waiting for Ollama to release llama3.2 11b in their library, because it has vision 👀. |
@jli113 hi, there's a service called Hyperbolic that provides $10 in initial credits to use LLMs from an endpoint; maybe we can integrate it? |
Too much, that's more than ¥70. |
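On the integration question: if the provider exposes an OpenAI-compatible API, it should mostly be a matter of pointing the client at a different base URL. A minimal sketch under that assumption (the base URL, env var names, and model id below are placeholders, check the provider's docs for the real values):

```python
import os
from openai import OpenAI

# Sketch only: assumes an OpenAI-compatible endpoint.
client = OpenAI(
    base_url=os.environ.get("LLM_BASE_URL", "https://api.example.com/v1"),  # placeholder
    api_key=os.environ["LLM_API_KEY"],
)

resp = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-70B-Instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```
|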
When using a local LLM in Ollama, force models weaker than GPT-4 to return answers in JSON format.
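For reference, a minimal sketch of forcing JSON output through Ollama's native /api/chat endpoint (assumes Ollama is running on its default port and the model is already pulled; the model name and prompt are placeholders, not the project's actual calls):

```python
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "mistral-nemo",  # placeholder: any locally pulled model
        "messages": [
            {"role": "system", "content": "Reply only with a JSON object."},
            {"role": "user", "content": "Summarize the experiment results."},
        ],
        "format": "json",  # constrains the model to emit valid JSON
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
content = resp.json()["message"]["content"]
data = json.loads(content)  # should now parse without the usual surrounding prose
print(data)
```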