Polish Readme for ModelScope-related examples (#12603)
ATMxsp01 authored Dec 26, 2024
1 parent 28737c2 commit ef585d3
Showing 7 changed files with 8 additions and 8 deletions.
4 changes: 2 additions & 2 deletions python/llm/example/GPU/HuggingFace/LLM/chatglm3/README.md
@@ -108,7 +108,7 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
```

Arguments info:
- - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the ChatGLM3 model to be downloaded, or the path to the checkpoint folder. It defaults to `'THUDM/chatglm3-6b'` for **Hugging Face** or `'ZhipuAI/chatglm3-6b'` for **ModelScope**.
+ - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `THUDM/chatglm3-6b`) or **ModelScope** (e.g. `ZhipuAI/chatglm3-6b`) repo id for the ChatGLM3 model to be downloaded, or the path to the checkpoint folder. It defaults to `'THUDM/chatglm3-6b'` for **Hugging Face** or `'ZhipuAI/chatglm3-6b'` for **ModelScope**.
- `--prompt PROMPT`: argument defining the prompt used for inference (with the integrated prompt format for chat). It defaults to `'AI是什么?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
- `--modelscope`: use **ModelScope** instead of **Hugging Face** as the model hub.
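
For reference, a ModelScope run of this example with the defaults above might look like the following (an illustrative invocation assembled from the documented arguments):

```bash
# Download ChatGLM3 from the ModelScope hub and predict 32 tokens for the default prompt
python ./generate.py --repo-id-or-model-path ZhipuAI/chatglm3-6b \
    --prompt 'AI是什么?' --n-predict 32 --modelscope
```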
@@ -162,7 +162,7 @@ python ./streamchat.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --question
```

Arguments info:
- - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the ChatGLM3 model to be downloaded, or the path to the checkpoint folder. It defaults to `'THUDM/chatglm3-6b'` for **Hugging Face** or `'ZhipuAI/chatglm3-6b'` for **ModelScope**.
+ - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `THUDM/chatglm3-6b`) or **ModelScope** (e.g. `ZhipuAI/chatglm3-6b`) repo id for the ChatGLM3 model to be downloaded, or the path to the checkpoint folder. It defaults to `'THUDM/chatglm3-6b'` for **Hugging Face** or `'ZhipuAI/chatglm3-6b'` for **ModelScope**.
- `--question QUESTION`: argument defining the question to ask. It defaults to `"晚上睡不着应该怎么办"`.
- `--disable-stream`: argument defining whether to stream the chat. If `--disable-stream` is included when running the script, streaming chat is disabled and the `chat()` API is used.
- `--modelscope`: use **ModelScope** instead of **Hugging Face** as the model hub.
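
Likewise, a non-streaming chat against the default Hugging Face checkpoint might be run as follows (illustrative):

```bash
# Keep the default Hugging Face repo id and switch to the non-streaming chat() API
python ./streamchat.py --repo-id-or-model-path THUDM/chatglm3-6b \
    --question "晚上睡不着应该怎么办" --disable-stream
```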
2 changes: 1 addition & 1 deletion python/llm/example/GPU/HuggingFace/LLM/codegeex2/README.md
@@ -119,7 +119,7 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
```

Arguments info:
- - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the CodeGeeX2 model to be downloaded, or the path to the checkpoint folder. It defaults to `'THUDM/codegeex2-6b'` for **Hugging Face** or `'ZhipuAI/codegeex-6b'` for **ModelScope**.
+ - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `THUDM/codegeex2-6b`) or **ModelScope** (e.g. `ZhipuAI/codegeex-6b`) repo id for the CodeGeeX2 model to be downloaded, or the path to the checkpoint folder. It defaults to `'THUDM/codegeex2-6b'` for **Hugging Face** or `'ZhipuAI/codegeex-6b'` for **ModelScope**.
- `--prompt PROMPT`: argument defining the prompt used for inference (with the integrated prompt format for chat). It defaults to `'# language: Python\n# write a bubble sort function\n'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `128`.
- `--modelscope`: use **ModelScope** instead of **Hugging Face** as the model hub.
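
An illustrative ModelScope invocation that keeps the default bubble-sort prompt:

```bash
# The default prompt '# language: Python\n# write a bubble sort function\n' is used
python ./generate.py --repo-id-or-model-path ZhipuAI/codegeex-6b \
    --n-predict 128 --modelscope
```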
2 changes: 1 addition & 1 deletion python/llm/example/GPU/HuggingFace/LLM/glm4/README.md
@@ -113,7 +113,7 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
```

Arguments info:
- - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the GLM-4 model (e.g. `THUDM/glm-4-9b-chat`) to be downloaded, or the path to the checkpoint folder. It defaults to `'THUDM/glm-4-9b-chat'` for **Hugging Face** or `'ZhipuAI/glm-4-9b-chat'` for **ModelScope**.
+ - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `THUDM/glm-4-9b-chat`) or **ModelScope** (e.g. `ZhipuAI/glm-4-9b-chat`) repo id for the GLM-4 model to be downloaded, or the path to the checkpoint folder. It defaults to `'THUDM/glm-4-9b-chat'` for **Hugging Face** or `'ZhipuAI/glm-4-9b-chat'` for **ModelScope**.
- `--prompt PROMPT`: argument defining the prompt used for inference (with the integrated prompt format for chat). It defaults to `'AI是什么?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
- `--modelscope`: use **ModelScope** instead of **Hugging Face** as the model hub.
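
An illustrative ModelScope run with the defaults above:

```bash
# Fetch GLM-4 from ModelScope and answer the default prompt with 32 new tokens
python ./generate.py --repo-id-or-model-path ZhipuAI/glm-4-9b-chat \
    --prompt 'AI是什么?' --n-predict 32 --modelscope
```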
2 changes: 1 addition & 1 deletion python/llm/example/GPU/HuggingFace/LLM/minicpm/README.md
@@ -108,7 +108,7 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
```

Arguments info:
- - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the MiniCPM model (e.g. `openbmb/MiniCPM-2B-sft-bf16` or `openbmb/MiniCPM-1B-sft-bf16`) to be downloaded, or the path to the checkpoint folder. It defaults to `'openbmb/MiniCPM-2B-sft-bf16'` for **Hugging Face** and `'OpenBMB/MiniCPM-2B-sft-bf16'` for **ModelScope**.
+ - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `openbmb/MiniCPM-2B-sft-bf16` or `openbmb/MiniCPM-1B-sft-bf16`) or **ModelScope** (e.g. `OpenBMB/MiniCPM-2B-sft-bf16` or `OpenBMB/MiniCPM-1B-sft-bf16`) repo id for the MiniCPM model to be downloaded, or the path to the checkpoint folder. It defaults to `'openbmb/MiniCPM-2B-sft-bf16'` for **Hugging Face** and `'OpenBMB/MiniCPM-2B-sft-bf16'` for **ModelScope**.
- `--prompt PROMPT`: argument defining the prompt used for inference (with the integrated prompt format for chat). It defaults to `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
- `--modelscope`: use **ModelScope** instead of **Hugging Face** as the model hub.
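
For instance, the smaller 1B variant can be selected explicitly (illustrative; note the lowercase `openbmb` namespace on Hugging Face versus `OpenBMB` on ModelScope):

```bash
# Run the 1B variant from Hugging Face instead of the 2B default
python ./generate.py --repo-id-or-model-path openbmb/MiniCPM-1B-sft-bf16 \
    --prompt 'What is AI?' --n-predict 32
```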
2 changes: 1 addition & 1 deletion python/llm/example/GPU/HuggingFace/LLM/minicpm3/README.md
@@ -110,7 +110,7 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
```

Arguments info:
- - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the MiniCPM3 model (e.g. `openbmb/MiniCPM3-4B`) to be downloaded, or the path to the checkpoint folder. It defaults to `'openbmb/MiniCPM3-4B'` for **Hugging Face** or `'OpenBMB/MiniCPM3-4B'` for **ModelScope**.
+ - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `openbmb/MiniCPM3-4B`) or **ModelScope** (e.g. `OpenBMB/MiniCPM3-4B`) repo id for the MiniCPM3 model to be downloaded, or the path to the checkpoint folder. It defaults to `'openbmb/MiniCPM3-4B'` for **Hugging Face** or `'OpenBMB/MiniCPM3-4B'` for **ModelScope**.
- `--prompt PROMPT`: argument defining the prompt used for inference (with the integrated prompt format for chat). It defaults to `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
- `--modelscope`: use **ModelScope** instead of **Hugging Face** as the model hub.
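
An illustrative ModelScope run with the defaults above:

```bash
# Download MiniCPM3-4B from ModelScope; the namespace there is capitalized as OpenBMB
python ./generate.py --repo-id-or-model-path OpenBMB/MiniCPM3-4B \
    --prompt 'What is AI?' --n-predict 32 --modelscope
```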
2 changes: 1 addition & 1 deletion — MiniCPM-V-2_6 README (file path not shown)
@@ -138,7 +138,7 @@ set SYCL_CACHE_PERSISTENT=1
> For chatting in streaming mode, it is recommended to set the environment variable `PYTHONUNBUFFERED=1`.
Arguments info:
- - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the MiniCPM-V-2_6 (e.g. `openbmb/MiniCPM-V-2_6`) to be downloaded, or the path to the checkpoint folder. It defaults to `'openbmb/MiniCPM-V-2_6'` for **Hugging Face** or `'OpenBMB/MiniCPM-V-2_6'` for **ModelScope**.
+ - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `openbmb/MiniCPM-V-2_6`) or **ModelScope** (e.g. `OpenBMB/MiniCPM-V-2_6`) repo id for the MiniCPM-V-2_6 model to be downloaded, or the path to the checkpoint folder. It defaults to `'openbmb/MiniCPM-V-2_6'` for **Hugging Face** or `'OpenBMB/MiniCPM-V-2_6'` for **ModelScope**.
- `--lowbit-path LOWBIT_MODEL_PATH`: argument defining the path to save/load the model with IPEX-LLM low-bit optimization. If it is an empty string, the original pretrained model specified by `REPO_ID_OR_MODEL_PATH` will be loaded. If it is an existing path, the saved model with low-bit optimization in `LOWBIT_MODEL_PATH` will be loaded. If it is a non-existing path, the original pretrained model specified by `REPO_ID_OR_MODEL_PATH` will be loaded, and the optimized low-bit model will be saved into `LOWBIT_MODEL_PATH`. It defaults to `''`, i.e. an empty string.
- `--image-url-or-path IMAGE_URL_OR_PATH`: argument defining the image used for inference. It defaults to `'http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg'`.
- `--prompt PROMPT`: argument defining the prompt used for inference (with the integrated prompt format for chat). It defaults to `'What is in the image?'`.
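
An illustrative invocation: the entry-point script name is not shown in this diff and is assumed here, as is a `--modelscope` switch behaving like in the other examples; the `--lowbit-path` value is a hypothetical local path:

```bash
# Script name and lowbit path are assumptions; check the example's README for the real entry point.
# On the first run the low-bit model is saved to --lowbit-path; later runs load it from there.
python ./chat.py --repo-id-or-model-path OpenBMB/MiniCPM-V-2_6 \
    --lowbit-path ./minicpm-v-2_6-lowbit \
    --image-url-or-path 'http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg' \
    --prompt 'What is in the image?' --modelscope
```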
2 changes: 1 addition & 1 deletion — GLM-4V README (file path not shown)
@@ -110,7 +110,7 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
```

Arguments info:
- - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the GLM-4V model (e.g. `THUDM/glm-4v-9b`) to be downloaded, or the path to the checkpoint folder. It defaults to `'THUDM/glm-4v-9b'` for **Hugging Face** or `'ZhipuAI/glm-4v-9b'` for **ModelScope**.
+ - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** (e.g. `THUDM/glm-4v-9b`) or **ModelScope** (e.g. `ZhipuAI/glm-4v-9b`) repo id for the GLM-4V model to be downloaded, or the path to the checkpoint folder. It defaults to `'THUDM/glm-4v-9b'` for **Hugging Face** or `'ZhipuAI/glm-4v-9b'` for **ModelScope**.
- `--image-url-or-path IMAGE_URL_OR_PATH`: argument defining the image used for inference. It defaults to `'http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg'`.
- `--prompt PROMPT`: argument defining the prompt used for inference (with the integrated prompt format for chat). It defaults to `'What is in the image?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
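
An illustrative ModelScope run that describes the default sample image:

```bash
# The default image URL is used; GLM-4V is pulled from ModelScope
python ./generate.py --repo-id-or-model-path ZhipuAI/glm-4v-9b \
    --prompt 'What is in the image?' --n-predict 32 --modelscope
```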
