correct Readme GPU example and API docstring (intel-analytics#9225)
* update readme to correct GPU usage

* update from_pretrained supported low bit options

* fix style check
Chen, Zhentao authored Oct 19, 2023
1 parent d449e36 commit 9ecf572
Showing 2 changed files with 6 additions and 4 deletions.
1 change: 1 addition & 0 deletions python/llm/README.md
````diff
@@ -127,6 +127,7 @@ You may apply INT4 optimizations to any Hugging Face *Transformers* model on Int
 ```python
 #load Hugging Face Transformers model with INT4 optimizations
 from bigdl.llm.transformers import AutoModelForCausalLM
+import intel_extension_for_pytorch
 model = AutoModelForCausalLM.from_pretrained('/path/to/model/', load_in_4bit=True)
 
 #run the optimized model on Intel GPU
````
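The added import matters because `intel_extension_for_pytorch` has to be imported before PyTorch exposes the `xpu` device. For context, a minimal end-to-end sketch of the corrected GPU flow; the tokenizer class, prompt, and generation arguments are illustrative assumptions, not part of this diff:

```python
#load Hugging Face Transformers model with INT4 optimizations
from bigdl.llm.transformers import AutoModelForCausalLM
import intel_extension_for_pytorch  # registers the 'xpu' device with PyTorch
from transformers import AutoTokenizer  # assumed tokenizer class, not shown in the diff

model = AutoModelForCausalLM.from_pretrained('/path/to/model/', load_in_4bit=True)

#run the optimized model on Intel GPU
model = model.to('xpu')

tokenizer = AutoTokenizer.from_pretrained('/path/to/model/')
input_ids = tokenizer.encode("Once upon a time", return_tensors="pt").to('xpu')
output_ids = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output_ids[0].cpu(), skip_special_tokens=True))
```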
9 changes: 5 additions & 4 deletions python/llm/src/bigdl/llm/transformers/model.py
```diff
@@ -60,9 +60,9 @@ def from_pretrained(cls,
         :param load_in_4bit: boolean value, True means load linear's weight to symmetric int 4.
                              Default to be False.
         :param load_in_low_bit: str value, options are sym_int4, asym_int4, sym_int5, asym_int5
-                                , sym_int8 or fp16. sym_int4 means symmetric int 4, asym_int4 means
-                                asymmetric int 4, etc. Relevant low bit optimizations will
-                                be applied to the model.
+                                , sym_int8, nf3, nf4 or fp16. sym_int4 means symmetric int 4,
+                                asym_int4 means asymmetric int 4, nf4 means 4-bit NormalFloat, etc.
+                                Relevant low bit optimizations will be applied to the model.
         :param optimize_model: boolean value, Whether to further optimize the low_bit llm model.
                                Default to be True.
         :param modules_to_not_convert: list of str value, modules (nn.Module) that are skipped when
```
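As a usage note for the expanded docstring, a hypothetical call selecting the newly documented `nf4` option; the model path is a placeholder:

```python
from bigdl.llm.transformers import AutoModelForCausalLM

# Load linear weights as 4-bit NormalFloat rather than symmetric int4;
# '/path/to/model/' is a placeholder path.
model = AutoModelForCausalLM.from_pretrained('/path/to/model/',
                                             load_in_low_bit="nf4")
```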
```diff
@@ -106,7 +106,8 @@ def load_convert(cls, q_k, optimize_model, *args, **kwargs):
         from .convert import ggml_convert_low_bit
         invalidInputError(q_k in ggml_tensor_qtype,
                           f"Unknown load_in_low_bit value: {q_k}, expected:"
-                          f" sym_int4, asym_int4, sym_int5, asym_int5, sym_int8 or fp16.")
+                          f" sym_int4, asym_int4, sym_int5, asym_int5, sym_int8, nf3, nf4 "
+                          "or fp16.")
         qtype = ggml_tensor_qtype[q_k]
 
         # In case it needs a second try,
```
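The check above hinges on membership in `ggml_tensor_qtype`, so supporting a new scheme only needs a dict entry plus the updated error string. A self-contained sketch of that pattern with made-up qtype ids (the real mapping lives elsewhere in `bigdl.llm`):

```python
# Illustrative stand-in for bigdl.llm's ggml_tensor_qtype; the ids below are
# invented, only the dict-membership validation pattern mirrors the diff above.
ggml_tensor_qtype = {
    "sym_int4": 2, "asym_int4": 3, "sym_int5": 6, "asym_int5": 7,
    "sym_int8": 8, "nf3": 10, "nf4": 11, "fp16": 12,
}

def resolve_qtype(q_k: str) -> int:
    # Unknown strings fail fast with the full list of supported options,
    # matching the behavior of the invalidInputError call above.
    if q_k not in ggml_tensor_qtype:
        raise ValueError(f"Unknown load_in_low_bit value: {q_k}, expected:"
                         f" sym_int4, asym_int4, sym_int5, asym_int5, sym_int8,"
                         f" nf3, nf4 or fp16.")
    return ggml_tensor_qtype[q_k]
```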
