From 7f62b418a4c605257183c3bbf1f7b98f0904fe5f Mon Sep 17 00:00:00 2001
From: Ettore Di Giacinto <mudler@localai.io>
Date: Wed, 29 Jan 2025 15:16:07 +0100
Subject: [PATCH] chore(docs): add documentation for l4t images

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---
 .../docs/getting-started/container-images.md | 14 +++++++++++++-
 docs/content/docs/reference/nvidia-l4t.md    | 10 ++++++++--
 2 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/docs/content/docs/getting-started/container-images.md b/docs/content/docs/getting-started/container-images.md
index 64f6dbc93165..a6a955adbcf0 100644
--- a/docs/content/docs/getting-started/container-images.md
+++ b/docs/content/docs/getting-started/container-images.md
@@ -154,7 +154,7 @@ Images are available with and without python dependencies. Note that images with
 
 Images with `core` in the tag are smaller and do not contain any python dependencies.
 
-{{< tabs tabTotal="7" >}}
+{{< tabs tabTotal="8" >}}
 {{% tab tabName="Vanilla / CPU Images" %}}
 
 | Description | Quay | Docker Hub |
@@ -236,6 +236,18 @@ Images with `core` in the tag are smaller and do not contain any python dependen
 | Versioned image including FFMpeg, no python | `quay.io/go-skynet/local-ai:{{< version >}}-vulkan-fmpeg-core` | `localai/localai:{{< version >}}-vulkan-fmpeg-core` |
 {{% /tab %}}
 
+{{% tab tabName="Nvidia Linux for Tegra" %}}
+
+These images are compatible with Nvidia ARM64 devices, such as the Jetson Nano, Jetson Xavier NX, and Jetson AGX Xavier. For more information, see the [Nvidia L4T guide]({{%relref "docs/reference/nvidia-l4t" %}}).
+
+| Description | Quay | Docker Hub |
+| --- | --- | --- |
+| Latest images from the branch (development) | `quay.io/go-skynet/local-ai:master-nvidia-l4t-arm64-core` | `localai/localai:master-nvidia-l4t-arm64-core` |
+| Latest tag | `quay.io/go-skynet/local-ai:latest-nvidia-l4t-arm64-core` | `localai/localai:latest-nvidia-l4t-arm64-core` |
+| Versioned image | `quay.io/go-skynet/local-ai:{{< version >}}-nvidia-l4t-arm64-core` | `localai/localai:{{< version >}}-nvidia-l4t-arm64-core` |
+
+{{% /tab %}}
+
 {{< /tabs >}}
 
 ## See Also
diff --git a/docs/content/docs/reference/nvidia-l4t.md b/docs/content/docs/reference/nvidia-l4t.md
index 028ee5318fef..ce0fd5e95c6f 100644
--- a/docs/content/docs/reference/nvidia-l4t.md
+++ b/docs/content/docs/reference/nvidia-l4t.md
@@ -21,7 +21,13 @@
 git clone https://github.com/mudler/LocalAI
 
 cd LocalAI
-docker build --build-arg SKIP_DRIVERS=true --build-arg BUILD_TYPE=cublas --build-arg BASE_IMAGE=nvcr.io/nvidia/l4t-jetpack:r36.4.0 --build-arg IMAGE_TYPE=core -t localai-orin .
+docker build --build-arg SKIP_DRIVERS=true --build-arg BUILD_TYPE=cublas --build-arg BASE_IMAGE=nvcr.io/nvidia/l4t-jetpack:r36.4.0 --build-arg IMAGE_TYPE=core -t quay.io/go-skynet/local-ai:master-nvidia-l4t-arm64-core .
+```
+
+Otherwise, prebuilt images are available on Quay.io and Docker Hub:
+
+```bash
+docker pull quay.io/go-skynet/local-ai:master-nvidia-l4t-arm64-core
 ```
 
 ## Usage
@@ -29,7 +35,7 @@ docker build --build-arg SKIP_DRIVERS=true --build-arg BUILD_TYPE=cublas --build
 Run the LocalAI container on Nvidia ARM64 devices using the following command, where `/data/models` is the directory containing the models:
 
 ```bash
-docker run -e DEBUG=true -p 8080:8080 -v /data/models:/build/models -ti --restart=always --name local-ai --runtime nvidia --gpus all localai-orin
+docker run -e DEBUG=true -p 8080:8080 -v /data/models:/build/models -ti --restart=always --name local-ai --runtime nvidia --gpus all quay.io/go-skynet/local-ai:master-nvidia-l4t-arm64-core
 ```
 
 Note: `/data/models` is the directory containing the models. You can replace it with the directory containing your models.
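As a quick smoke test once the container above is running, LocalAI's OpenAI-compatible API can be queried from the host. This is a minimal sketch, not part of the patched docs; it assumes the default `-p 8080:8080` mapping from the `docker run` command and that at least the server (not necessarily any model) has started:

```shell
# List installed models via the OpenAI-compatible endpoint.
# Any JSON response confirms the server is reachable; the "data"
# array may be empty if no models have been installed yet.
curl http://localhost:8080/v1/models
```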