From bb053c0b22b4ed9236e47da8e214b57a09036962 Mon Sep 17 00:00:00 2001
From: Jason Dai
Date: Mon, 25 Mar 2024 19:16:57 +0800
Subject: [PATCH] Update readme (#5166)

---
 README.md                         |  4 ++--
 docs/readthedocs/source/index.rst | 12 ++++++++++--
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index c1da4c943e2..5c74efe53f3 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
 > [!IMPORTANT]
-> ***`bigdl-llm` has now become `ipex-llm`, and our future development will move to the [IPEX-LLM](https://github.com/intel-analytics/BigDL) project***
+> ***`bigdl-llm` has now become `ipex-llm`, and our future development will move to the [IPEX-LLM](https://github.com/intel-analytics/ipex-llm) project***
 
 ---
@@ -16,7 +16,7 @@
 > *It is built on the excellent work of [llama.cpp](https://github.com/ggerganov/llama.cpp), [bitsandbytes](https://github.com/TimDettmers/bitsandbytes), [qlora](https://github.com/artidoro/qlora), [gptq](https://github.com/IST-DASLab/gptq), [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), [awq](https://github.com/mit-han-lab/llm-awq), [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), [vLLM](https://github.com/vllm-project/vllm), [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), [gptq_for_llama](https://github.com/qwopqwop200/GPTQ-for-LLaMa), [chatglm.cpp](https://github.com/li-plus/chatglm.cpp), [redpajama.cpp](https://github.com/togethercomputer/redpajama.cpp), [gptneox.cpp](https://github.com/byroneverson/gptneox.cpp), [bloomz.cpp](https://github.com/NouamaneTazi/bloomz.cpp/), etc.*
 
 ### Latest update 🔥
-- [2024/03] 🔔🔔🔔 **`bigdl-llm` has now become [`ipex-llm`](https://github.com/intel-analytics/BigDL); see the migration guide [here](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/bigdl_llm_migration.html).**
+- [2024/03] 🔔🔔🔔 **`bigdl-llm` has now become [`ipex-llm`](https://github.com/intel-analytics/ipex-llm); see the migration guide [here](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/bigdl_llm_migration.html).**
 - [2024/03] **LangChain** added support for `bigdl-llm`; see the details [here](https://python.langchain.com/docs/integrations/llms/bigdl).
 - [2024/02] `bigdl-llm` now supports directly loading models from [ModelScope](python/llm/example/GPU/ModelScope-Models) ([魔搭](python/llm/example/CPU/ModelScope-Models)).
 - [2024/02] `bigdl-llm` added initial **INT2** support (based on the llama.cpp [IQ2](python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF-IQ2) mechanism), which makes it possible to run large-size LLMs (e.g., Mixtral-8x7B) on an Intel GPU with 16GB VRAM.
diff --git a/docs/readthedocs/source/index.rst b/docs/readthedocs/source/index.rst
index 34e6b6b8388..3988fbcd5be 100644
--- a/docs/readthedocs/source/index.rst
+++ b/docs/readthedocs/source/index.rst
@@ -1,7 +1,15 @@
 .. meta::
    :google-site-verification: S66K6GAclKw1RroxU0Rka_2d1LZFVe27M0gRneEsIVI
 
-🔔🔔🔔 ``bigdl-llm`` **has now become** ``ipex-llm``, **and our future development will move to the** `IPEX-LLM <https://github.com/intel-analytics/BigDL>`_ **project** 🔔🔔🔔
+.. important::
+
+   .. raw:: html
+
+      <p>
+         <strong><code>bigdl-llm</code></strong> has now become <strong><code>ipex-llm</code></strong>, and our future
+         development will move to the <a href="https://github.com/intel-analytics/ipex-llm"><strong>IPEX-LLM</strong></a>
+         project.
+      </p>
 
 ################################################
 The BigDL Project
@@ -24,7 +32,7 @@ BigDL-LLM
 ============================================
 Latest update 🔥
 ============================================
-- [2024/03] 🔔🔔🔔 ``bigdl-llm`` **has now become** `ipex-llm <https://github.com/intel-analytics/BigDL>`_; see the migration guide `here <https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/bigdl_llm_migration.html>`_.
+- [2024/03] 🔔🔔🔔 ``bigdl-llm`` **has now become** `ipex-llm <https://github.com/intel-analytics/ipex-llm>`_; see the migration guide `here <https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/bigdl_llm_migration.html>`_.
 - [2024/03] **LangChain** added support for ``bigdl-llm``; see the details `here <https://python.langchain.com/docs/integrations/llms/bigdl>`_.
 - [2024/02] ``bigdl-llm`` now supports directly loading models from `ModelScope <https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/ModelScope-Models>`_ (`魔搭 <https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/CPU/ModelScope-Models>`_).
 - [2024/02] ``bigdl-llm`` added initial **INT2** support (based on the llama.cpp `IQ2 <https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF-IQ2>`_ mechanism), which makes it possible to run large-size LLMs (e.g., Mixtral-8x7B) on an Intel GPU with 16GB VRAM.
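
For user code, the `bigdl-llm` → `ipex-llm` rename this patch announces amounts mostly to a package rename, per the linked migration guide. A minimal before/after sketch, assuming the `ipex_llm` package layout described in that guide (the model ID and the `load_in_4bit` flag below are illustrative, not part of this patch):

```python
# Before the rename (bigdl-llm):
#   from bigdl.llm.transformers import AutoModelForCausalLM

# After the rename (ipex-llm); import path as described in the migration guide:
from ipex_llm.transformers import AutoModelForCausalLM

# The API mirrors Hugging Face transformers' AutoModelForCausalLM;
# load_in_4bit=True applies low-bit quantization when the model is loaded.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # illustrative model ID
    load_in_4bit=True,
)
```

The rest of a typical inference script (tokenizer, `model.generate(...)`) is unaffected by the rename, which is why the patch itself only touches documentation links.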