(v0.5.0) 【Starry Night Fireworks】 Thank you all, a thousand stars have gathered at the party! #127
heshengtao announced in Announcements
Click here to download the comfyui portable package containing the LLM party
✨v0.5.0✨【星夜烟火】【Starry Night Fireworks】
This release includes the following features:

1. GPT-sovits: start the API service by running `runtime\python.exe api_v2.py` in the GPT-sovits project folder. Additionally, the chatTTS node has been moved to comfyui LLM mafia. chatTTS has many dependencies, and its license on PyPI is CC BY-NC 4.0, which is a non-commercial license. Even though the chatTTS GitHub project is under the AGPL license, we moved the chatTTS node to comfyui LLM mafia to avoid unnecessary trouble. We hope everyone understands!
2. The API key and Base URL can now be updated in the `config.ini` file. When you use the `fix node` function on the API LLM loader node, it automatically reads the updated API key and Base URL from the `config.ini` file (a sketch of this idea follows the list).
3. Your workflow can be served as an OpenAI-compatible interface at `http://127.0.0.1:8817/v1/`. Connect the start and end of your workflow to the 'Start Workflow' and 'End Workflow' nodes, then save it in API format to the `workflow_api` folder. In any frontend that can call the OpenAI interface, set `model name=<your workflow name without the .json extension>`, `Base URL=http://127.0.0.1:8817/v1/`, and any value for the API key. Select your saved workflow in Settings, and you can chat with your workflow agent in Chat (see the client example after this list).
4. searxng: start it with `docker run -d -p 8080:8080 searxng/searxng` and access it at `http://localhost:8080`. Fill this URL into the party's searxng tool, and the LLM can then use searxng as a search tool (a quick reachability check follows the list).
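The announcement does not show the layout of `config.ini`, so the following is only a minimal sketch of the idea behind point 2: the key and URL live in an INI file that the API LLM loader re-reads when you run `fix node`. The section name `[API]` and the keys `api_key` and `base_url` are hypothetical placeholders, not the project's actual schema.

```python
# Hypothetical sketch: section/key names are placeholders and may not match
# the real config.ini shipped with comfyui LLM party.
import configparser

config = configparser.ConfigParser()
config.read("config.ini", encoding="utf-8")

if not config.has_section("API"):
    config.add_section("API")
config.set("API", "api_key", "sk-your-new-key")
config.set("API", "base_url", "https://api.openai.com/v1/")

with open("config.ini", "w", encoding="utf-8") as f:
    config.write(f)
# After saving, running "fix node" on the API LLM loader should pick up
# the updated values without restarting ComfyUI.
```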
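Assuming the endpoint at `http://127.0.0.1:8817/v1/` implements the standard chat-completions route (which is what OpenAI-compatible frontends call), a workflow saved as `my_workflow.json` in the `workflow_api` folder could be exercised with the official `openai` Python client like this; the model name and prompt are placeholders.

```python
# Minimal client sketch for the party's OpenAI-compatible workflow endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8817/v1/",
    api_key="any-value",  # the announcement says the key can be anything
)

response = client.chat.completions.create(
    model="my_workflow",  # placeholder: your workflow file name without .json
    messages=[{"role": "user", "content": "Hello, party!"}],
)
print(response.choices[0].message.content)
```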
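Before filling `http://localhost:8080` into the party's searxng tool, it can help to confirm that the container started by the `docker run` command above is reachable. This check only assumes the default port mapping; nothing here is specific to the party itself.

```python
# Quick reachability check for the local searxng instance.
import urllib.request

SEARXNG_URL = "http://localhost:8080"

try:
    with urllib.request.urlopen(SEARXNG_URL, timeout=5) as resp:
        print(f"searxng is up (HTTP {resp.status})")
except OSError as exc:
    print(f"searxng is not reachable at {SEARXNG_URL}: {exc}")
```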