
Commit

Update readme (#5166)
jason-dai authored Mar 25, 2024
1 parent de31e27 commit bb053c0
Showing 2 changed files with 12 additions and 4 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -1,5 +1,5 @@
> [!IMPORTANT]
- > ***`bigdl-llm` has now become `ipex-llm`, and our future development will move to the [IPEX-LLM](https://github.com/intel-analytics/BigDL) project***
+ > ***`bigdl-llm` has now become `ipex-llm`, and our future development will move to the [IPEX-LLM](https://github.com/intel-analytics/ipex-llm) project***
---
<div align="center">
@@ -16,7 +16,7 @@
> *It is built on the excellent work of [llama.cpp](https://github.com/ggerganov/llama.cpp), [bitsandbytes](https://github.com/TimDettmers/bitsandbytes), [qlora](https://github.com/artidoro/qlora), [gptq](https://github.com/IST-DASLab/gptq), [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), [awq](https://github.com/mit-han-lab/llm-awq), [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), [vLLM](https://github.com/vllm-project/vllm), [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), [gptq_for_llama](https://github.com/qwopqwop200/GPTQ-for-LLaMa), [chatglm.cpp](https://github.com/li-plus/chatglm.cpp), [redpajama.cpp](https://github.com/togethercomputer/redpajama.cpp), [gptneox.cpp](https://github.com/byroneverson/gptneox.cpp), [bloomz.cpp](https://github.com/NouamaneTazi/bloomz.cpp/), etc.*
### Latest update 🔥
- - [2024/03] 🔔🔔🔔 **`bigdl-llm` has now become [`ipex-llm`](https://github.com/intel-analytics/BigDL); see the migration guide [here](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/bigdl_llm_migration.html).**
+ - [2024/03] 🔔🔔🔔 **`bigdl-llm` has now become [`ipex-llm`](https://github.com/intel-analytics/ipex-llm); see the migration guide [here](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/bigdl_llm_migration.html).**
- [2024/03] **LangChain** added support for `bigdl-llm`; see the details [here](https://python.langchain.com/docs/integrations/llms/bigdl).
- [2024/02] `bigdl-llm` now supports directly loading models from [ModelScope](python/llm/example/GPU/ModelScope-Models) ([魔搭](python/llm/example/CPU/ModelScope-Models)).
- [2024/02] `bigdl-llm` added initial **INT2** support (based on llama.cpp [IQ2](python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF-IQ2) mechanism), which makes it possible to run large-size LLMs (e.g., Mixtral-8x7B) on an Intel GPU with 16GB VRAM.
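For readers tracking the rename, the migration guide linked above amounts to a package rename with the same API surface. Below is a minimal sketch of what existing `bigdl-llm` loading code looks like after moving to `ipex-llm`; the model id is a placeholder, and the assumption is that the `transformers`-style interface carries over unchanged (check the migration guide for the definitive steps):

```python
# Before the rename (bigdl-llm):
#   from bigdl.llm.transformers import AutoModelForCausalLM
# After the rename (ipex-llm) -- only the top-level package changes:
from ipex_llm.transformers import AutoModelForCausalLM

# Load a model with low-bit (4-bit) optimization, as in bigdl-llm.
# The model id below is illustrative only.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    load_in_4bit=True,
)
```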
12 changes: 10 additions & 2 deletions docs/readthedocs/source/index.rst
@@ -1,7 +1,15 @@
.. meta::
:google-site-verification: S66K6GAclKw1RroxU0Rka_2d1LZFVe27M0gRneEsIVI

- 🔔🔔🔔 ``bigdl-llm`` **has now become** ``ipex-llm``, **and our future development will move to the** `IPEX-LLM <https://github.com/intel-analytics/BigDL>`_ **project** 🔔🔔🔔
+ .. important::
+
+    .. raw:: html
+
+       <p>
+          <strong><em>
+             <code><span>bigdl-llm</span></code> has now become <code><span>ipex-llm</span></code>, and our future development will move to the <a href="https://github.com/intel-analytics/ipex-llm">IPEX-LLM</a> project.
+          </em></strong>
+       </p>

################################################
The BigDL Project
@@ -24,7 +32,7 @@ BigDL-LLM
============================================
Latest update 🔥
============================================
- - [2024/03] 🔔🔔🔔 ``bigdl-llm`` **has now become** `ipex-llm <https://github.com/intel-analytics/BigDL>`_; see the migration guide `here <https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/bigdl_llm_migration.html>`_.
+ - [2024/03] 🔔🔔🔔 ``bigdl-llm`` **has now become** `ipex-llm <https://github.com/intel-analytics/ipex-llm>`_; see the migration guide `here <https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/bigdl_llm_migration.html>`_.
- [2024/03] **LangChain** added support for ``bigdl-llm``; see the details `here <https://python.langchain.com/docs/integrations/llms/bigdl>`_.
- [2024/02] ``bigdl-llm`` now supports directly loading models from `ModelScope <https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/ModelScope-Models>`_ (`魔搭 <https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/CPU/ModelScope-Models>`_).
- [2024/02] ``bigdl-llm`` added initial **INT2** support (based on llama.cpp `IQ2 <https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF-IQ2>`_ mechanism), which makes it possible to run large-size LLMs (e.g., Mixtral-8x7B) on an Intel GPU with 16GB VRAM.
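The two 2024/02 items above correspond to concrete loading options in the linked example trees. The sketch below shows both together; note that `model_hub="modelscope"` and `load_in_low_bit="gguf_iq2_xxs"` are assumptions drawn from those example directories rather than a definitive API reference, and the model ids are illustrative:

```python
from bigdl.llm.transformers import AutoModelForCausalLM

# Load directly from ModelScope instead of the Hugging Face Hub
# (the `model_hub` flag is an assumption based on the ModelScope-Models examples).
model = AutoModelForCausalLM.from_pretrained(
    "qwen/Qwen-7B-Chat",
    load_in_4bit=True,
    trust_remote_code=True,
    model_hub="modelscope",
)

# INT2-style loading via the llama.cpp IQ2 mechanism
# ("gguf_iq2_xxs" is the assumed low-bit key from the GGUF-IQ2 example);
# moving to "xpu" targets an Intel GPU and requires the Intel GPU stack.
model_int2 = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    load_in_low_bit="gguf_iq2_xxs",
    trust_remote_code=True,
).to("xpu")
```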
