Hi Alex,
This is neither a bug nor a feature request, and possibly I'm just completely missing something here. I'm setting this up with LocalAI installed on my unRAID server (which has an NVIDIA Tesla P4 and the GPU drivers installed). LocalAI is great, but finding models isn't the most intuitive, and your model isn't in the list, so I downloaded Home-3B-v3.f16.gguf into my local_ai/models/ dir. However, for LocalAI to use it, I'm pretty sure I need a .yaml config file to go with it. I'm planning to start from llama-3.3-70b-instruct.yaml, which is, as the name suggests, LocalAI's .yaml file for the Llama 3.3 70B Instruct model:
context_size: 8192
f16: true
function:
  disable_no_action: true
  grammar:
    disabled: true
  response_regex:
    - <function=(?P<name>\w+)>(?P<arguments>.*)</function>
map: true
name: llama-3.3-70b-instruct
parameters:
  model: Llama-3.3-70B-Instruct.Q4_K_M.gguf
stopwords:
  - <|im_end|>
  - <dummy32000>
  - <|eot_id|>
  - <|end_of_text|>
template:
  chat: |
    {{.Input }}
    <|start_header_id|>assistant<|end_header_id|>
  chat_message: |
    <|start_header_id|>{{if eq .RoleName "assistant"}}assistant{{else if eq .RoleName "system"}}system{{else if eq .RoleName "tool"}}tool{{else if eq .RoleName "user"}}user{{end}}<|end_header_id|>
    {{ if .FunctionCall -}}
    Function call:
    {{ else if eq .RoleName "tool" -}}
    Function response:
    {{ end -}}
    {{ if .Content -}}
    {{.Content -}}
    {{ else if .FunctionCall -}}
    {{ toJson .FunctionCall -}}
    {{ end -}}
    <|eot_id|>
  completion: |
    {{.Input}}
  function: |
    <|start_header_id|>system<|end_header_id|>
    You have access to the following functions:
    {{range .Functions}}
    Use the function '{{.Name}}' to '{{.Description}}'
    {{toJson .Parameters}}
    {{end}}
    Think very carefully before calling functions.
    If a you choose to call a function ONLY reply in the following format with no prefix or suffix:
    <function=example_function_name>{{`{{"example_name": "example_value"}}`}}</function>
    Reminder:
    - If looking for real time information use relevant functions before falling back to searching on internet
    - Function calls MUST follow the specified format, start with <function= and end with </function>
    - Required parameters MUST be specified
    - Only call one function at a time
    - Put the entire function call reply on one line
    <|eot_id|>
    {{.Input }}
    <|start_header_id|>assistant<|end_header_id|>
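For what it's worth, my best guess at an adapted config for your model is sketched below. All I've done is mirror the keys from the Llama config above, point parameters.model at the GGUF I downloaded, and stub in ChatML-style templates as placeholders — I don't actually know Home-3B-v3's real prompt format, stop tokens, or context length, so those values are assumptions that would need to come from your model card:

# models/home-3b-v3.yaml -- my guess only, not a verified config
name: home-3b-v3                  # the model name API clients would request
context_size: 2048                # assumed; should be the model's real context length
f16: true
parameters:
  model: Home-3B-v3.f16.gguf      # matches the filename I put in local_ai/models/
stopwords:
  - <|im_end|>                    # placeholder ChatML stop token, possibly wrong for this model
# The templates below are ChatML-style placeholders and would need to be
# rewritten to match whatever prompt format Home-3B-v3 actually expects.
template:
  chat_message: |
    <|im_start|>{{ .RoleName }}
    {{ .Content }}<|im_end|>
  chat: |
    {{ .Input }}
    <|im_start|>assistant
  completion: |
    {{ .Input }}

If a config like that were picked up correctly, I assume the model would then show up under LocalAI's /v1/models endpoint under that name: value, and that's what I'd point the HA integration at — but please correct me if I've got the shape of this completely wrong.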
I'm a long-time IT/InfoSec professional and know my way around many things, but I'm new to LLMs and AI/ML. I figure it's time I jump in, and I've been desperately wanting to build an offline voice assistant for HA, so this is perfect. Maybe.
I'm assuming LocalAI needs this yaml file in order for the API to interact with the LLM, but beyond the guess above I have no idea how to create one for your model based on the instructions. Like I said, I must be missing something, and I'm going to feel like an idiot once I find out what it is. You do mention LocalAI as an option and mock me further by listing it as (Easier) 😅, but I didn't see any additional instructions or information for running your model with a LocalAI backend. I do see you mention the Llama 3.3 70B Instruct model above, but with LM Studio as the backend.
Because my server runs unRAID and "Apps" are installed as Docker containers configured via templates through the WebUI, I went with LocalAI since its interface is clean. I do have Ollama installed, along with open-webui as its front end, but I'd prefer to use LocalAI here if possible (unless there's a really good reason why I should use the Ollama backend instead).
I would greatly appreciate some guidance here. Thanks in advance.
micfogas changed the title from "How to Request: using with LocalAI on unRAID" to "How-To Request: using with LocalAI on unRAID" on Jan 26, 2025.