Releases · undreamai/LLMUnity
Release v2.0.1
🚀 Features
- Implement backend with DLLs (PR: #163)
- Separate LLM from LLMClient functionality (PR: #163; see the sketch below)
- Add sample with RAG and LLM integration (PR: #170)
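With the split from PR #163, one component loads the model and runs the DLL backend while another holds the conversation. A minimal sketch of the resulting two-component pattern, following the usage shown in the project README; the `LLMCharacter` name and `Chat` callback signature match the v2.x docs, but treat the exact signatures as assumptions rather than a verbatim reference:

```csharp
using UnityEngine;
using LLMUnity;

public class ChatBehaviour : MonoBehaviour
{
    // The LLM component (assigned in the scene) loads the model and
    // runs the backend; LLMCharacter keeps the chat history and queries it.
    public LLMCharacter llmCharacter;

    void Start()
    {
        // Fire-and-forget: partial replies stream into HandleReply,
        // and ReplyCompleted fires once the answer is finished.
        _ = llmCharacter.Chat("Hello bot!", HandleReply, ReplyCompleted);
    }

    void HandleReply(string reply) => Debug.Log(reply);

    void ReplyCompleted() => Debug.Log("Reply finished.");
}
```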
Release v1.2.9
🐛 Fixes
- Disable GPU compilation when running on CPU (PR: #159)
Release v1.2.8
🚀 Features
- Switch to llamafile v0.8.6 (PR: #155)
- Add phi-3 support (PR: #156)
Release v1.2.7
🚀 Features
- Add Llama 3 and Vicuna chat templates (PR: #145; see the template sketch below)
📦 General
- Use the context size of the model by default for longer history (PR: #147)
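The new templates can be selected at runtime as well as in the inspector. A hedged sketch, assuming the `SetTemplate` method referenced in later release notes; the `"llama3"` identifier string is a guess, so check the `ChatTemplate` class in the package source for the exact registered names:

```csharp
using UnityEngine;
using LLMUnity;

public class TemplateSetup : MonoBehaviour
{
    public LLM llm;

    void Start()
    {
        // Template identifier is an assumption; the registered names
        // live in the ChatTemplate class of the package.
        llm.SetTemplate("llama3");
    }
}
```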
Release v1.2.6
🚀 Features
- Add documentation (PR: #135)
🐛 Fixes
- Add server security against interception by external llamafile servers (PR: #132)
- Adapt server security for macOS (PR: #137)
📦 General
- Add sample demonstrating the async functionality (PR: #136; see the sketch below)
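The sample from PR #136 shows chatting without blocking the Unity main thread. A condensed sketch of the idea, assuming the v1.x `LLMClient` component and a `Chat` overload that returns a `Task<string>` and streams partial replies into a callback; verify the exact signature against this version's API:

```csharp
using UnityEngine;
using LLMUnity;

public class AsyncChatExample : MonoBehaviour
{
    public LLMClient llmClient;

    async void Start()
    {
        // The await keeps Unity's main loop responsive while partial
        // replies stream into the Debug.Log callback.
        string reply = await llmClient.Chat("Tell me a joke.", Debug.Log);
        Debug.Log("Completed: " + reply);
    }
}
```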
Release v1.2.5
🐛 Fixes
- Add to chat history only if the response is not null (PR: #123)
- Allow SetTemplate function in Runtime (PR: #129)
Release v1.2.4
🚀 Features
- Use llamafile v0.6.2 (PR: #111)
- Pure text completion functionality (PR: #115; see the sketch after this release's notes)
- Allow change of roles after starting the interaction (PR: #120)
🐛 Fixes
- Use Debug.LogError instead of Exception for more verbosity (PR: #113)
- Trim chat responses (PR: #118)
- Fallback to CPU for macOS with unsupported GPU (PR: #119)
- Remove duplicate EditorGUI.EndChangeCheck() (PR: #110)
📦 General
- Provide access to LLMUnity version (PR: #117)
- Rename to "LLM for Unity" (PR: #121)
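PR #115 exposes raw completion alongside chat: the prompt is passed through without a chat template or history. A hedged sketch of how such a call might look; the `Complete` method name follows the release note and the later v2.x API, but treat it as an assumption for this version:

```csharp
using UnityEngine;
using LLMUnity;

public class CompletionExample : MonoBehaviour
{
    public LLMClient llmClient;

    async void Start()
    {
        // No chat template or history is applied: the prompt is sent
        // verbatim and the model simply continues the text.
        string completion = await llmClient.Complete("The capital of France is", Debug.Log);
        Debug.Log(completion);
    }
}
```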
Release v1.2.3
🐛 Fixes
- Fix async server 2 (PR: #108)
Release v1.2.2
🐛 Fixes
- Use namespaces in all classes (PR: #104)
- Await separately in StartServer (PR: #107)
Release v1.2.1
🐛 Fixes
- Kill server after Unity crash (PR: #101)
- Persist chat template on remote servers (PR: #103; remote-client sketch below)
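The fix in PR #103 concerns the remote setup, where a client connects to an LLM server running elsewhere. A hedged sketch of that setup, assuming the v1.x `host`/`port` client fields shown in the inspector; the field names, the address, and the 13333 port (the historical default) are assumptions:

```csharp
using UnityEngine;
using LLMUnity;

public class RemoteChatClient : MonoBehaviour
{
    public LLMClient llmClient;

    void Start()
    {
        // Point the client at a server started on another machine;
        // field names follow the v1.x inspector and are assumptions.
        llmClient.host = "192.168.1.10";
        llmClient.port = 13333;
        _ = llmClient.Chat("Hi!", Debug.Log);
    }
}
```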