From f0aaa130a99431a3404262ce0c60ffdbadcfb873 Mon Sep 17 00:00:00 2001
From: Yuwen Hu <54161268+Oscilloscope98@users.noreply.github.com>
Date: Thu, 30 May 2024 17:40:18 +0800
Subject: [PATCH] Update miniconda/anaconda -> miniforge in documentation
 (#11176)

* Update miniconda/anaconda -> miniforge in installation guide

* Update for all Quickstart

* further fix for docs
---
 .../DockerGuides/docker_pytorch_inference_gpu.md    |  2 +-
 .../Overview/KeyFeatures/multi_gpus_selection.md    |  2 +-
 .../source/doc/LLM/Overview/install_cpu.md          |  2 +-
 .../source/doc/LLM/Overview/install_gpu.md          | 10 +++++-----
 .../doc/LLM/Quickstart/continue_quickstart.md       |  6 +++---
 .../source/doc/LLM/Quickstart/install_linux_gpu.md  |  8 ++++----
 .../doc/LLM/Quickstart/install_windows_gpu.md       | 14 +++++++-------
 .../llama3_llamacpp_ollama_quickstart.md            |  8 ++++----
 .../doc/LLM/Quickstart/llama_cpp_quickstart.md      |  8 ++++----
 .../source/doc/LLM/Quickstart/ollama_quickstart.md  |  8 ++++----
 .../source/doc/LLM/Quickstart/webui_quickstart.md   | 10 +++++-----
 11 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/docs/readthedocs/source/doc/LLM/DockerGuides/docker_pytorch_inference_gpu.md b/docs/readthedocs/source/doc/LLM/DockerGuides/docker_pytorch_inference_gpu.md
index 0c69a5a4820..76409384721 100644
--- a/docs/readthedocs/source/doc/LLM/DockerGuides/docker_pytorch_inference_gpu.md
+++ b/docs/readthedocs/source/doc/LLM/DockerGuides/docker_pytorch_inference_gpu.md
@@ -5,7 +5,7 @@ We can run PyTorch Inference Benchmark, Chat Service and PyTorch Examples on Int
 ```eval_rst
 .. note::
 
-   The current Windows + WSL + Docker solution only supports Arc series dGPU. For Windows users with MTL iGPU, it is recommended to install directly via pip install in Anaconda Prompt. Refer to `this guide `_.
+   The current Windows + WSL + Docker solution only supports Arc series dGPU. For Windows users with MTL iGPU, it is recommended to install directly via pip install in Miniforge Prompt. Refer to `this guide `_.
 ```
diff --git a/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/multi_gpus_selection.md b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/multi_gpus_selection.md
index 88aee516a4b..1bacc1e84a7 100644
--- a/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/multi_gpus_selection.md
+++ b/docs/readthedocs/source/doc/LLM/Overview/KeyFeatures/multi_gpus_selection.md
@@ -10,7 +10,7 @@ The `sycl-ls` tool enumerates a list of devices available in the system. You can
 .. tabs::
    .. tab:: Windows
 
-      Please make sure you are using CMD (Anaconda Prompt if using conda):
+      Please make sure you are using CMD (Miniforge Prompt if using conda):
 
       .. code-block:: cmd
 
diff --git a/docs/readthedocs/source/doc/LLM/Overview/install_cpu.md b/docs/readthedocs/source/doc/LLM/Overview/install_cpu.md
index c19ddd4cf52..990e3f0910f 100644
--- a/docs/readthedocs/source/doc/LLM/Overview/install_cpu.md
+++ b/docs/readthedocs/source/doc/LLM/Overview/install_cpu.md
@@ -51,7 +51,7 @@ Here list the recommended hardware and OS for smooth IPEX-LLM optimization exper
 
 For optimal performance with LLM models using IPEX-LLM optimizations on Intel CPUs, here are some best practices for setting up environment:
 
-First we recommend using [Conda](https://docs.conda.io/en/latest/miniconda.html) to create a python 3.11 enviroment:
+First we recommend using [Conda](https://conda-forge.org/download/) to create a python 3.11 environment:
 
 ```eval_rst
 .. tabs::
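The three hunks above all converge on the same first step for CPU users; a minimal sketch of that flow, assuming Miniforge is already installed, with the env name `llm` and the `ipex-llm[all]` CPU extra taken from the surrounding install guide:

```bash
# Create and activate a Python 3.11 environment with conda (from Miniforge),
# then install IPEX-LLM for CPU from PyPI.
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
```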
diff --git a/docs/readthedocs/source/doc/LLM/Overview/install_gpu.md b/docs/readthedocs/source/doc/LLM/Overview/install_gpu.md
index 7ee5a60f52b..52303ef528b 100644
--- a/docs/readthedocs/source/doc/LLM/Overview/install_gpu.md
+++ b/docs/readthedocs/source/doc/LLM/Overview/install_gpu.md
@@ -45,7 +45,7 @@ If you have driver version lower than `31.0.101.5122`, it is recommended to [**u
 ### Install IPEX-LLM
 #### Install IPEX-LLM From PyPI
 
-We recommend using [miniconda](https://docs.conda.io/en/latest/miniconda.html) to create a python 3.11 enviroment.
+We recommend using [Miniforge](https://conda-forge.org/download/) to create a python 3.11 environment.
 
 ```eval_rst
 .. important::
@@ -108,7 +108,7 @@ pip install --pre --upgrade ipex-llm[xpu]
 
 To use GPU acceleration on Windows, several environment variables are required before running a GPU example:
 
-<!-- Please note that you need to set these environment variables again once you have a new Anaconda Prompt window. -->
+<!-- Please note that you need to set these environment variables again once you have a new Miniforge Prompt window. -->
 
 ## Linux
@@ -434,7 +434,7 @@ IPEX-LLM GPU support on Linux has been verified on:
 ### Install IPEX-LLM
 #### Install IPEX-LLM From PyPI
 
-We recommend using [miniconda](https://docs.conda.io/en/latest/miniconda.html) to create a python 3.11 enviroment:
+We recommend using [Miniforge](https://conda-forge.org/download/) to create a python 3.11 environment:
 
 ```eval_rst
 .. important::
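The Windows hunk above notes that the required environment variables must be re-set in every new Miniforge Prompt window; on Linux, the equivalent runtime step is sketched below, assuming a default oneAPI install path and the `SYCL_CACHE_PERSISTENT` setting recommended elsewhere in these guides:

```bash
# Runtime configuration before launching a GPU example on Linux.
source /opt/intel/oneapi/setvars.sh   # assumes the default oneAPI location
export SYCL_CACHE_PERSISTENT=1        # persist the SYCL kernel cache between runs
python generate.py                    # hypothetical example script
```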
diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/continue_quickstart.md b/docs/readthedocs/source/doc/LLM/Quickstart/continue_quickstart.md
index 42bab8b1c8e..6862311814b 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/continue_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/continue_quickstart.md
@@ -48,7 +48,7 @@ Now we need to pull a model for coding. Here we use [CodeQWen1.5-7B](https://hug
 
    .. tab:: Windows
 
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
 
       .. code-block:: cmd
 
@@ -72,7 +72,7 @@ Start by creating a file named `Modelfile` with the following content:
 FROM codeqwen:latest
 PARAMETER num_ctx 4096
 ```
-Next, use the following commands in the terminal (Linux) or Anaconda Prompt (Windows) to create a new model in Ollama named `codeqwen:latest-continue`:
+Next, use the following commands in the terminal (Linux) or Miniforge Prompt (Windows) to create a new model in Ollama named `codeqwen:latest-continue`:
 
 ```bash
@@ -81,7 +81,7 @@ Next, use the following commands in the terminal (Linux) or Anaconda Prompt (Win
 After creation, run `ollama list` to see `codeqwen:latest-continue` in the list of models.
 
-Finally, preload the new model by executing the following command in a new terminal (Linux) or Anaconda prompt (Windows):
+Finally, preload the new model by executing the following command in a new terminal (Linux) or Miniforge Prompt (Windows):
 
 ```bash
 ollama run codeqwen:latest-continue
diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/install_linux_gpu.md b/docs/readthedocs/source/doc/LLM/Quickstart/install_linux_gpu.md
index cef7063b279..47d8f4a3eeb 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/install_linux_gpu.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/install_linux_gpu.md
@@ -153,10 +153,10 @@ sudo dpkg -i *.deb
 
 ### Setup Python Environment
 
-Download and install the Miniconda as follows if you don't have conda installed on your machine:
+Download and install Miniforge as follows if you don't have conda installed on your machine:
 ```bash
-  wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
-  bash Miniconda3-latest-Linux-x86_64.sh
+  wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh
+  bash Miniforge3-Linux-x86_64.sh
   source ~/.bashrc
 ```
@@ -259,7 +259,7 @@ To use GPU acceleration on Linux, several environment variables are required or
 
 Now let's play with a real LLM. We'll be using the [phi-1.5](https://huggingface.co/microsoft/phi-1_5) model, a 1.3 billion parameter LLM for this demostration. Follow the steps below to setup and run the model, and observe how it responds to a prompt "What is AI?".
 
-* Step 1: Open the **Anaconda Prompt** and activate the Python environment `llm` you previously created:
+* Step 1: Activate the Python environment `llm` you previously created:
   ```bash
   conda activate llm
   ```
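The `@@ -72` and `@@ -81` hunks of `continue_quickstart.md` bracket the actual create command without showing it; the standard Ollama CLI flow they describe looks roughly like this (`ollama create -f` is Ollama's documented syntax, and the model tag follows the quickstart):

```bash
# Write a Modelfile that enlarges the context window, register it under a
# new tag, then preload the model so Continue gets fast first responses.
cat > Modelfile <<'EOF'
FROM codeqwen:latest
PARAMETER num_ctx 4096
EOF
ollama create codeqwen:latest-continue -f Modelfile
ollama list                          # should now list codeqwen:latest-continue
ollama run codeqwen:latest-continue  # preload from a new terminal
```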
diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/install_windows_gpu.md b/docs/readthedocs/source/doc/LLM/Quickstart/install_windows_gpu.md
index 6da8ed9ff7a..fe94002f7fe 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/install_windows_gpu.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/install_windows_gpu.md
@@ -39,13 +39,13 @@ Download and install the latest GPU driver from the [official Intel download pag
 
 ### Setup Python Environment
 
-Visit [Miniconda installation page](https://docs.anaconda.com/free/miniconda/), download the **Miniconda installer for Windows**, and follow the instructions to complete the installation.
+Visit [Miniforge installation page](https://conda-forge.org/download/), download the **Miniforge installer for Windows**, and follow the instructions to complete the installation.
 
-After installation, open the **Anaconda Prompt**, create a new python environment `llm`:
+After installation, open the **Miniforge Prompt**, create a new python environment `llm`:
 ```cmd
 conda create -n llm python=3.11 libuv
 ```
@@ -83,7 +83,7 @@ With the `llm` environment active, use `pip` to install `ipex-llm` for GPU. Choo
 You can verify if `ipex-llm` is successfully installed following below steps.
 
 ### Step 1: Runtime Configurations
-* Open the **Anaconda Prompt** and activate the Python environment `llm` you previously created:
+* Open the **Miniforge Prompt** and activate the Python environment `llm` you previously created:
   ```cmd
   conda activate llm
   ```
@@ -117,9 +117,9 @@ You can verify if `ipex-llm` is successfully installed following below steps.
 ### Step 2: Run Python Code
 
-* Launch the Python interactive shell by typing `python` in the Anaconda prompt window and then press Enter.
+* Launch the Python interactive shell by typing `python` in the Miniforge Prompt window and then press Enter.
 
-* Copy following code to Anaconda prompt **line by line** and press Enter **after copying each line**.
+* Copy following code to Miniforge Prompt **line by line** and press Enter **after copying each line**.
 
   ```python
   import torch
   from ipex_llm.transformers import AutoModel,AutoModelForCausalLM
@@ -211,7 +211,7 @@ Now let's play with a real LLM. We'll be using the [Qwen-1.8B-Chat](https://hugg
 
    .. tab:: ModelScope
 
-      Please first run following command in Anaconda Prompt to install ModelScope:
+      Please first run following command in Miniforge Prompt to install ModelScope:
 
       .. code-block:: cmd
 
diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/llama3_llamacpp_ollama_quickstart.md b/docs/readthedocs/source/doc/LLM/Quickstart/llama3_llamacpp_ollama_quickstart.md
index 98f7b529507..0576cc98d8a 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/llama3_llamacpp_ollama_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/llama3_llamacpp_ollama_quickstart.md
@@ -75,7 +75,7 @@ Under your current directory, exceuting below command to do inference with Llama
 
    .. tab:: Windows
 
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
 
       .. code-block:: bash
@@ -94,7 +94,7 @@ Under your current directory, you can also execute below command to have interac
 
    .. tab:: Windows
 
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
 
       .. code-block:: bash
@@ -138,7 +138,7 @@ Launch the Ollama service:
 
    .. tab:: Windows
 
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
 
       .. code-block:: bash
@@ -183,7 +183,7 @@ Keep the Ollama service on and open another terminal and run llama3 with `ollama
 
   .. tab:: Windows
 
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
 
       .. code-block:: bash
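The `llama3_llamacpp_ollama_quickstart.md` hunks above only touch the tab scaffolding around the inference commands; a typical llama.cpp invocation of the kind those tabs wrap might look like this (the flags are standard llama.cpp options, while the GGUF filename and layer count are placeholders):

```bash
# Generate 32 tokens from a local GGUF model, offloading 33 layers to the
# GPU (-ngl); replace the model path with your own download.
./main -m Meta-Llama-3-8B-Instruct.Q4_K_M.gguf \
       -n 32 --prompt "Once upon a time" \
       -t 8 -e -ngl 33 --color
```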
diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/llama_cpp_quickstart.md b/docs/readthedocs/source/doc/LLM/Quickstart/llama_cpp_quickstart.md
index 39d60e66e2f..1373a781489 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/llama_cpp_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/llama_cpp_quickstart.md
@@ -49,7 +49,7 @@ To use `llama.cpp` with IPEX-LLM, first ensure that `ipex-llm[cpp]` is installed
 
    .. note::
 
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
 
      .. code-block:: cmd
@@ -86,7 +86,7 @@ Then you can use following command to initialize `llama.cpp` with IPEX-LLM:
 
    .. tab:: Windows
 
-      Please run the following command with **administrator privilege in Anaconda Prompt**.
+      Please run the following command with **administrator privilege in Miniforge Prompt**.
 
       .. code-block:: bash
@@ -127,7 +127,7 @@ To use GPU acceleration, several environment variables are required or recommend
 
    .. tab:: Windows
 
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
 
       .. code-block:: bash
@@ -169,7 +169,7 @@ Before running, you should download or copy community GGUF model to your current
 
    .. tab:: Windows
 
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
 
       .. code-block:: bash
diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md b/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md
index 5e602691244..fa81d73a24e 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md
@@ -39,7 +39,7 @@ Activate the `llm-cpp` conda environment and initialize Ollama by executing the
 
    .. tab:: Windows
 
-      Please run the following command with **administrator privilege in Anaconda Prompt**.
+      Please run the following command with **administrator privilege in Miniforge Prompt**.
 
       .. code-block:: bash
@@ -76,7 +76,7 @@ You may launch the Ollama service as below:
 
   .. tab:: Windows
 
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
 
       .. code-block:: bash
@@ -149,7 +149,7 @@ model**, e.g. `dolphin-phi`.
 
   .. tab:: Windows
 
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
 
       .. code-block:: bash
@@ -187,7 +187,7 @@ Then you can create the model in Ollama by `ollama create example -f Modelfile`
 
   .. tab:: Windows
 
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
 
       .. code-block:: bash
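The `ollama_quickstart.md` hunks above wrap a serve-then-run workflow; sketched end to end under the quickstart's assumptions (the `./ollama` binary produced by `init-ollama`, and the `dolphin-phi` example model the guide names):

```bash
# Start the Ollama service in the background, then pull and chat with the
# example model; use plain `ollama` instead for a system-wide install.
./ollama serve &
./ollama pull dolphin-phi
./ollama run dolphin-phi
```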
diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/webui_quickstart.md b/docs/readthedocs/source/doc/LLM/Quickstart/webui_quickstart.md
index 0e931eeeb0a..3aab958928f 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/webui_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/webui_quickstart.md
@@ -30,7 +30,7 @@ Download the `text-generation-webui` with IPEX-LLM integrations from [this link]
 
 #### Install Dependencies
 
-Open **Anaconda Prompt** and activate the conda environment you have created in [section 1](#1-install-ipex-llm), e.g., `llm`.
+Open **Miniforge Prompt** and activate the conda environment you have created in [section 1](#1-install-ipex-llm), e.g., `llm`.
 ```
 conda activate llm
 ```
@@ -50,7 +50,7 @@ pip install -r extensions/openai/requirements.txt
 ### 3 Start the WebUI Server
 
 #### Set Environment Variables
-Configure oneAPI variables by running the following command in **Anaconda Prompt**:
+Configure oneAPI variables by running the following command in **Miniforge Prompt**:
 
 ```eval_rst
 .. note::
@@ -67,7 +67,7 @@ set BIGDL_LLM_XMX_DISABLED=1
 ```
 
 #### Launch the Server
-In **Anaconda Prompt** with the conda environment `llm` activated, navigate to the `text-generation-webui` folder and execute the following commands (You can optionally lanch the server with or without the API service):
+In **Miniforge Prompt** with the conda environment `llm` activated, navigate to the `text-generation-webui` folder and execute the following commands (You can optionally launch the server with or without the API service):
 
 ##### without API service
 ```cmd
@@ -154,7 +154,7 @@ Enter prompts into the textbox at the bottom and press the **Generate** button t
 
 #### Exit the WebUI
 
-To shut down the WebUI server, use **Ctrl+C** in the **Anaconda Prompt** terminal where the WebUI Server is runing, then close your browser tab.
+To shut down the WebUI server, use **Ctrl+C** in the **Miniforge Prompt** terminal where the WebUI Server is running, then close your browser tab.
 
 ### 5. Advanced Usage
@@ -203,7 +203,7 @@ The first response to user prompt might be slower than expected, with delays of
 
 During model loading, you may encounter an **ImportError** like `ImportError: This modeling file requires the following packages that were not found in your environment`. This indicates certain packages required by the model are absent from your environment. Detailed instructions for installing these necessary packages can be found at the bottom of the error messages. Take the following steps to fix these errors:
 
-- Exit the WebUI Server by pressing **Ctrl+C** in the **Anaconda Prompt** terminal.
+- Exit the WebUI Server by pressing **Ctrl+C** in the **Miniforge Prompt** terminal.
 - Install the missing pip packages as specified in the error message
 - Restart the WebUI Server.
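For reference, the launch sequence the `webui_quickstart.md` hunks revolve around is short; a sketch assuming the `llm` environment and the `--load-in-4bit` flag the guide uses (`--api`, `--api-port`, and `--listen` are standard text-generation-webui flags):

```bash
# From Miniforge Prompt (Windows) or a terminal (Linux), with `llm` active:
cd text-generation-webui
python server.py --load-in-4bit
# or, to also expose the OpenAI-compatible API:
# python server.py --load-in-4bit --api --api-port 5000 --listen
```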