Update miniconda/anaconda -> miniforge in documentation (#11176)
* Update miniconda/anaconda -> miniforge in installation guide

* Update for all Quickstart

* further fix for docs
Oscilloscope98 authored May 30, 2024
1 parent c0f1be6 commit f0aaa13
Showing 11 changed files with 39 additions and 39 deletions.
@@ -5,7 +5,7 @@ We can run PyTorch Inference Benchmark, Chat Service and PyTorch Examples on Int
```eval_rst
.. note::
The current Windows + WSL + Docker solution only supports Arc series dGPU. For Windows users with MTL iGPU, it is recommended to install directly via pip install in Anaconda Prompt. Refer to `this guide <https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_windows_gpu.html>`_.
The current Windows + WSL + Docker solution only supports Arc series dGPU. For Windows users with MTL iGPU, it is recommended to install directly via pip install in Miniforge Prompt. Refer to `this guide <https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_windows_gpu.html>`_.
```

@@ -10,7 +10,7 @@ The `sycl-ls` tool enumerates a list of devices available in the system. You can
.. tabs::
.. tab:: Windows
Please make sure you are using CMD (Anaconda Prompt if using conda):
Please make sure you are using CMD (Miniforge Prompt if using conda):
.. code-block:: cmd
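The `sycl-ls` check above can also be scripted. Below is a hedged sketch that counts GPU entries in captured output — the sample lines and the parsing rule are illustrative assumptions, since real output varies by driver version and backend:

```python
# Hedged sketch: count GPU devices in captured `sycl-ls` output.
# The device-tag format (e.g. "[opencl:gpu:0]") is an assumption based on
# typical sycl-ls output; real output varies by driver version.
def count_gpu_devices(sycl_ls_output):
    return sum(
        1
        for line in sycl_ls_output.splitlines()
        if ":gpu:" in line.split("]")[0]  # look only at the backend tag
    )

sample = (
    "[opencl:cpu:0] Intel(R) OpenCL, Intel(R) Core(TM) i7\n"
    "[opencl:gpu:1] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A770 Graphics\n"
    "[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Arc(TM) A770 Graphics"
)
print(count_gpu_devices(sample))  # 2
```

If this prints 0 against real output, the GPU is not visible to the SYCL runtime and the driver installation should be rechecked.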
2 changes: 1 addition & 1 deletion docs/readthedocs/source/doc/LLM/Overview/install_cpu.md
@@ -51,7 +51,7 @@ Here list the recommended hardware and OS for smooth IPEX-LLM optimization exper

For optimal performance with LLM models using IPEX-LLM optimizations on Intel CPUs, here are some best practices for setting up environment:

First we recommend using [Conda](https://docs.conda.io/en/latest/miniconda.html) to create a python 3.11 enviroment:
First we recommend using [Conda](https://conda-forge.org/download/) to create a Python 3.11 environment:

```eval_rst
.. tabs::
10 changes: 5 additions & 5 deletions docs/readthedocs/source/doc/LLM/Overview/install_gpu.md
@@ -45,7 +45,7 @@ If you have driver version lower than `31.0.101.5122`, it is recommended to [**u
### Install IPEX-LLM
#### Install IPEX-LLM From PyPI

We recommend using [miniconda](https://docs.conda.io/en/latest/miniconda.html) to create a python 3.11 enviroment.
We recommend using [Miniforge](https://conda-forge.org/download/) to create a Python 3.11 environment.

```eval_rst
.. important::
@@ -108,7 +108,7 @@ pip install --pre --upgrade ipex-llm[xpu]

To use GPU acceleration on Windows, several environment variables are required before running a GPU example:

<!-- Make sure you are using CMD (Anaconda Prompt if using conda) as PowerShell is not supported, and configure oneAPI environment variables with:
<!-- Make sure you are using CMD (Miniforge Prompt if using conda) as PowerShell is not supported, and configure oneAPI environment variables with:
```cmd
call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
@@ -157,11 +157,11 @@ If you meet an error when importing `intel_extension_for_pytorch`, please ensure tha
conda install libuv
```

<!-- * For oneAPI installed using the Offline installer, make sure you have configured oneAPI environment variables in your Anaconda Prompt through
<!-- * For oneAPI installed using the Offline installer, make sure you have configured oneAPI environment variables in your Miniforge Prompt through
```cmd
call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
```
Please note that you need to set these environment variables again once you have a new Anaconda Prompt window. -->
Please note that you need to set these environment variables again once you have a new Miniforge Prompt window. -->
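The note above exists because `setvars.bat` only modifies the environment of the current prompt session. A hedged Python illustration of that point — `ONEAPI_ROOT` is used here as an assumed indicator variable, not one the guide names:

```python
import os

# setvars.bat exports oneAPI variables into the current process environment
# only, which is why every new Miniforge Prompt window must run it again.
# ONEAPI_ROOT is an assumed indicator variable for illustration.
def oneapi_configured(env=None):
    env = os.environ if env is None else env
    return "ONEAPI_ROOT" in env

print(oneapi_configured({"ONEAPI_ROOT": r"C:\Program Files (x86)\Intel\oneAPI"}))  # True
print(oneapi_configured({}))  # False
```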

## Linux

@@ -434,7 +434,7 @@ IPEX-LLM GPU support on Linux has been verified on:
### Install IPEX-LLM
#### Install IPEX-LLM From PyPI

We recommend using [miniconda](https://docs.conda.io/en/latest/miniconda.html) to create a python 3.11 enviroment:
We recommend using [Miniforge](https://conda-forge.org/download/) to create a Python 3.11 environment:

```eval_rst
.. important::
@@ -48,7 +48,7 @@ Now we need to pull a model for coding. Here we use [CodeQWen1.5-7B](https://hug
.. tab:: Windows
Please run the following command in Anaconda Prompt.
Please run the following command in Miniforge Prompt.
.. code-block:: cmd
@@ -72,7 +72,7 @@ Start by creating a file named `Modelfile` with the following content:
FROM codeqwen:latest
PARAMETER num_ctx 4096
```
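The `Modelfile` step above can also be scripted; a minimal sketch whose file contents come directly from the guide (the helper name is hypothetical):

```python
import tempfile
from pathlib import Path

# Sketch of the Modelfile creation step above; the two lines written come
# directly from the guide, the helper name is hypothetical.
def write_modelfile(path, base="codeqwen:latest", num_ctx=4096):
    content = f"FROM {base}\nPARAMETER num_ctx {num_ctx}\n"
    Path(path).write_text(content)
    return content

demo_path = Path(tempfile.mkdtemp()) / "Modelfile"
print(write_modelfile(demo_path), end="")
```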
Next, use the following commands in the terminal (Linux) or Anaconda Prompt (Windows) to create a new model in Ollama named `codeqwen:latest-continue`:
Next, use the following commands in the terminal (Linux) or Miniforge Prompt (Windows) to create a new model in Ollama named `codeqwen:latest-continue`:


```bash
@@ -81,7 +81,7 @@ Next, use the following commands in the terminal (Linux) or Anaconda Prompt (Win

After creation, run `ollama list` to see `codeqwen:latest-continue` in the list of models.

Finally, preload the new model by executing the following command in a new terminal (Linux) or Anaconda prompt (Windows):
Finally, preload the new model by executing the following command in a new terminal (Linux) or Miniforge Prompt (Windows):

```bash
ollama run codeqwen:latest-continue
@@ -153,10 +153,10 @@ sudo dpkg -i *.deb

### Setup Python Environment

Download and install the Miniconda as follows if you don't have conda installed on your machine:
Download and install Miniforge as follows if you don't have conda installed on your machine:
```bash
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh
bash Miniforge3-Linux-x86_64.sh
source ~/.bashrc
```
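The download above follows Miniforge's release-asset naming scheme (`Miniforge3-<OS>-<arch>.sh`). A hedged sketch that builds the URL for other platforms — values beyond `Linux`/`x86_64` are assumptions based on the same pattern:

```python
# Build a Miniforge installer URL following the asset-naming pattern of the
# wget command above; platform values other than Linux/x86_64 are assumed
# to follow the same scheme.
def miniforge_installer_url(system="Linux", machine="x86_64"):
    return (
        "https://github.com/conda-forge/miniforge/releases/latest/download/"
        f"Miniforge3-{system}-{machine}.sh"
    )

print(miniforge_installer_url())  # ends with Miniforge3-Linux-x86_64.sh
```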

@@ -259,7 +259,7 @@ To use GPU acceleration on Linux, several environment variables are required or

Now let's play with a real LLM. We'll be using the [phi-1.5](https://huggingface.co/microsoft/phi-1_5) model, a 1.3 billion parameter LLM, for this demonstration. Follow the steps below to set up and run the model, and observe how it responds to the prompt "What is AI?".

* Step 1: Open the **Anaconda Prompt** and activate the Python environment `llm` you previously created:
* Step 1: Activate the Python environment `llm` you previously created:
```bash
conda activate llm
```
@@ -39,13 +39,13 @@ Download and install the latest GPU driver from the [official Intel download pag

### Setup Python Environment

Visit [Miniconda installation page](https://docs.anaconda.com/free/miniconda/), download the **Miniconda installer for Windows**, and follow the instructions to complete the installation.
Visit the [Miniforge installation page](https://conda-forge.org/download/), download the **Miniforge installer for Windows**, and follow the instructions to complete the installation.

<div align="center">
<img src="https://llm-assets.readthedocs.io/en/latest/_images/quickstart_windows_gpu_5.png" width=70%/>
<img src="https://llm-assets.readthedocs.io/en/latest/_images/quickstart_windows_gpu_miniforge_download.png" width=80%/>
</div>

After installation, open the **Anaconda Prompt**, create a new python environment `llm`:
After installation, open the **Miniforge Prompt**, create a new python environment `llm`:
```cmd
conda create -n llm python=3.11 libuv
```
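The `conda create` command above pins Python 3.11. As a hedged sketch, here is a helper that checks an interpreter version string against that requirement (the helper itself is hypothetical, not part of the guide):

```python
# Hypothetical helper mirroring the python=3.11 pin in the conda command
# above: accept only interpreters whose major.minor version is exactly 3.11.
def meets_python_requirement(version, required=(3, 11)):
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) == required

print(meets_python_requirement("3.11.9"))   # True
print(meets_python_requirement("3.10.13"))  # False
```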
@@ -83,7 +83,7 @@ With the `llm` environment active, use `pip` to install `ipex-llm` for GPU. Choo
You can verify whether `ipex-llm` is successfully installed by following the steps below.

### Step 1: Runtime Configurations
* Open the **Anaconda Prompt** and activate the Python environment `llm` you previously created:
* Open the **Miniforge Prompt** and activate the Python environment `llm` you previously created:
```cmd
conda activate llm
```
@@ -117,9 +117,9 @@ You can verify whether `ipex-llm` is successfully installed by following the steps below.

### Step 2: Run Python Code

* Launch the Python interactive shell by typing `python` in the Anaconda prompt window and then press Enter.
* Launch the Python interactive shell by typing `python` in the Miniforge Prompt window and then press Enter.

* Copy following code to Anaconda prompt **line by line** and press Enter **after copying each line**.
* Copy the following code into the Miniforge Prompt **line by line** and press Enter **after copying each line**.
```python
import torch
from ipex_llm.transformers import AutoModel,AutoModelForCausalLM
@@ -211,7 +211,7 @@ Now let's play with a real LLM. We'll be using the [Qwen-1.8B-Chat](https://hugg
.. tab:: ModelScope
Please first run following command in Anaconda Prompt to install ModelScope:
Please first run following command in Miniforge Prompt to install ModelScope:
.. code-block:: cmd
@@ -75,7 +75,7 @@ Under your current directory, execute the below command to do inference with Llama
.. tab:: Windows
Please run the following command in Anaconda Prompt.
Please run the following command in Miniforge Prompt.
.. code-block:: bash
@@ -94,7 +94,7 @@ Under your current directory, you can also execute below command to have interac
.. tab:: Windows
Please run the following command in Anaconda Prompt.
Please run the following command in Miniforge Prompt.
.. code-block:: bash
@@ -138,7 +138,7 @@ Launch the Ollama service:
.. tab:: Windows
Please run the following command in Anaconda Prompt.
Please run the following command in Miniforge Prompt.
.. code-block:: bash
@@ -183,7 +183,7 @@ Keep the Ollama service on and open another terminal and run llama3 with `ollama
.. tab:: Windows
Please run the following command in Anaconda Prompt.
Please run the following command in Miniforge Prompt.
.. code-block:: bash
@@ -49,7 +49,7 @@ To use `llama.cpp` with IPEX-LLM, first ensure that `ipex-llm[cpp]` is installed
.. note::
Please run the following command in Anaconda Prompt.
Please run the following command in Miniforge Prompt.
.. code-block:: cmd
@@ -86,7 +86,7 @@ Then you can use the following command to initialize `llama.cpp` with IPEX-LLM:
.. tab:: Windows
Please run the following command with **administrator privilege in Anaconda Prompt**.
Please run the following command with **administrator privilege in Miniforge Prompt**.
.. code-block:: bash
@@ -127,7 +127,7 @@ To use GPU acceleration, several environment variables are required or recommend
.. tab:: Windows
Please run the following command in Anaconda Prompt.
Please run the following command in Miniforge Prompt.
.. code-block:: bash
@@ -169,7 +169,7 @@ Before running, you should download or copy community GGUF model to your current
.. tab:: Windows
Please run the following command in Anaconda Prompt.
Please run the following command in Miniforge Prompt.
.. code-block:: bash
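Since the steps above expect community GGUF models in the current directory, here is a small hedged sketch for locating them before invoking `llama.cpp` (the helper is illustrative, not part of the guide):

```python
from pathlib import Path

# List community GGUF model files in the working directory, which the
# run step above expects to find there.
def find_gguf_models(directory="."):
    return sorted(p.name for p in Path(directory).glob("*.gguf"))

print(find_gguf_models())
```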
@@ -39,7 +39,7 @@ Activate the `llm-cpp` conda environment and initialize Ollama by executing the
.. tab:: Windows
Please run the following command with **administrator privilege in Anaconda Prompt**.
Please run the following command with **administrator privilege in Miniforge Prompt**.
.. code-block:: bash
@@ -76,7 +76,7 @@ You may launch the Ollama service as below:
.. tab:: Windows
Please run the following command in Anaconda Prompt.
Please run the following command in Miniforge Prompt.
.. code-block:: bash
@@ -149,7 +149,7 @@ model**, e.g. `dolphin-phi`.
.. tab:: Windows
Please run the following command in Anaconda Prompt.
Please run the following command in Miniforge Prompt.
.. code-block:: bash
@@ -187,7 +187,7 @@ Then you can create the model in Ollama by `ollama create example -f Modelfile`
.. tab:: Windows
Please run the following command in Anaconda Prompt.
Please run the following command in Miniforge Prompt.
.. code-block:: bash
10 changes: 5 additions & 5 deletions docs/readthedocs/source/doc/LLM/Quickstart/webui_quickstart.md
@@ -30,7 +30,7 @@ Download the `text-generation-webui` with IPEX-LLM integrations from [this link]

#### Install Dependencies

Open **Anaconda Prompt** and activate the conda environment you have created in [section 1](#1-install-ipex-llm), e.g., `llm`.
Open **Miniforge Prompt** and activate the conda environment you have created in [section 1](#1-install-ipex-llm), e.g., `llm`.
```
conda activate llm
```
@@ -50,7 +50,7 @@ pip install -r extensions/openai/requirements.txt
### 3 Start the WebUI Server

#### Set Environment Variables
Configure oneAPI variables by running the following command in **Anaconda Prompt**:
Configure oneAPI variables by running the following command in **Miniforge Prompt**:

```eval_rst
.. note::
@@ -67,7 +67,7 @@ set BIGDL_LLM_XMX_DISABLED=1
```
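When launching the server from a Python wrapper instead of cmd, the same variable can be set via `os.environ`; a minimal sketch of the `set` line above (the wrapper scenario is an assumption):

```python
import os

# Python equivalent of the `set BIGDL_LLM_XMX_DISABLED=1` cmd line above;
# it must run before the server process is started so the child inherits it.
os.environ["BIGDL_LLM_XMX_DISABLED"] = "1"
print(os.environ["BIGDL_LLM_XMX_DISABLED"])  # 1
```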

#### Launch the Server
In **Anaconda Prompt** with the conda environment `llm` activated, navigate to the `text-generation-webui` folder and execute the following commands (You can optionally lanch the server with or without the API service):
In **Miniforge Prompt** with the conda environment `llm` activated, navigate to the `text-generation-webui` folder and execute the following commands (you can optionally launch the server with or without the API service):

##### without API service
```cmd
@@ -154,7 +154,7 @@ Enter prompts into the textbox at the bottom and press the **Generate** button t

#### Exit the WebUI

To shut down the WebUI server, use **Ctrl+C** in the **Anaconda Prompt** terminal where the WebUI Server is runing, then close your browser tab.
To shut down the WebUI server, use **Ctrl+C** in the **Miniforge Prompt** terminal where the WebUI Server is running, then close your browser tab.


### 5. Advanced Usage
@@ -203,7 +203,7 @@ The first response to user prompt might be slower than expected, with delays of

During model loading, you may encounter an **ImportError** like `ImportError: This modeling file requires the following packages that were not found in your environment`. This indicates certain packages required by the model are absent from your environment. Detailed instructions for installing these necessary packages can be found at the bottom of the error messages. Take the following steps to fix these errors:

- Exit the WebUI Server by pressing **Ctrl+C** in the **Anaconda Prompt** terminal.
- Exit the WebUI Server by pressing **Ctrl+C** in the **Miniforge Prompt** terminal.
- Install the missing pip packages as specified in the error message
- Restart the WebUI Server.
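The fix-up loop above can be partially automated: an `ImportError` carries the missing module's name. A hedged sketch (the demo module name is deliberately nonexistent):

```python
# Identify which package an ImportError complains about so it can be
# pip-installed before restarting the server, as described above.
def missing_module(exc):
    return getattr(exc, "name", None)

try:
    import package_that_is_not_installed  # deliberately nonexistent module
except ImportError as err:
    print(missing_module(err))  # package_that_is_not_installed
```

Note that the package name reported by `ImportError` is the import name, which may differ from the name used on PyPI.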

