ipex-llm Public
Forked from intel-analytics/ipex-llm
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, …)
Python | Apache License 2.0 | Updated Dec 24, 2024
anything-llm Public
Forked from Mintplex-Labs/anything-llm
The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, and more.
JavaScript | MIT License | Updated Sep 20, 2024