
Deployed 6903076 to 0.4 with MkDocs 1.6.0 and mike 2.1.3
gitlawr committed Dec 17, 2024
1 parent 6903076 commit bd4d6a5
Showing 3 changed files with 18 additions and 18 deletions.
2 changes: 1 addition & 1 deletion 0.4/search/search_index.json

Large diffs are not rendered by default.

30 changes: 15 additions & 15 deletions 0.4/user-guide/image-generation-apis/index.html
@@ -1813,21 +1813,21 @@ <h2 id="supported-models">Supported Models</h2>
<p>Please use the converted GGUF models provided by GPUStack. Check the model link for more details.</p>
</div>
<ul>
<li><a href="https://huggingface.co/gpustack/stable-diffusion-v3-5-large-turbo-GGUF">stabilityai/stable-diffusion-3.5-large-turbo</a></li>
<li><a href="https://huggingface.co/gpustack/stable-diffusion-v3-5-large-GGUF">stabilityai/stable-diffusion-3.5-large</a></li>
<li><a href="https://huggingface.co/gpustack/stable-diffusion-v3-5-medium-GGUF">stabilityai/stable-diffusion-3.5-medium</a></li>
<li><a href="https://huggingface.co/gpustack/stable-diffusion-v3-medium-GGUF">stabilityai/stable-diffusion-3-medium</a></li>
<li><a href="https://huggingface.co/gpustack/FLUX.1-mini-GGUF">TencentARC/FLUX.1-mini</a></li>
<li><a href="https://huggingface.co/gpustack/FLUX.1-lite-GGUF">Freepik/FLUX.1-lite</a></li>
<li><a href="https://huggingface.co/gpustack/FLUX.1-dev-GGUF">black-forest-labs/FLUX.1-dev</a></li>
<li><a href="https://huggingface.co/gpustack/FLUX.1-schnell-GGUF">black-forest-labs/FLUX.1-schnell</a></li>
<li><a href="https://huggingface.co/gpustack/stable-diffusion-xl-1.0-turbo-GGUF">stabilityai/sdxl-turbo</a></li>
<li><a href="https://huggingface.co/gpustack/stable-diffusion-xl-refiner-1.0-GGUF">stabilityai/stable-diffusion-xl-refiner-1.0</a></li>
<li><a href="https://huggingface.co/gpustack/stable-diffusion-xl-base-1.0-GGUF">stabilityai/stable-diffusion-xl-base-1.0</a></li>
<li><a href="https://huggingface.co/gpustack/stable-diffusion-v2-1-turbo-GGUF">stabilityai/sd-turbo</a></li>
<li><a href="https://huggingface.co/gpustack/stable-diffusion-v2-1-GGUF">stabilityai/stable-diffusion-2-1</a></li>
<li><a href="https://huggingface.co/gpustack/stable-diffusion-v1-5-GGUF">stable-diffusion-v1-5/stable-diffusion-v1-5</a></li>
<li><a href="https://huggingface.co/gpustack/stable-diffusion-v1-4-GGUF">CompVis/stable-diffusion-v1-4</a></li>
+<li>stabilityai/stable-diffusion-3.5-large-turbo <a href="https://huggingface.co/gpustack/stable-diffusion-v3-5-large-turbo-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/stable-diffusion-v3-5-large-turbo-GGUF">[ModelScope]</a></li>
+<li>stabilityai/stable-diffusion-3.5-large <a href="https://huggingface.co/gpustack/stable-diffusion-v3-5-large-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/stable-diffusion-v3-5-large-GGUF">[ModelScope]</a></li>
+<li>stabilityai/stable-diffusion-3.5-medium <a href="https://huggingface.co/gpustack/stable-diffusion-v3-5-medium-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/stable-diffusion-v3-5-medium-GGUF">[ModelScope]</a></li>
+<li>stabilityai/stable-diffusion-3-medium <a href="https://huggingface.co/gpustack/stable-diffusion-v3-medium-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/stable-diffusion-v3-medium-GGUF">[ModelScope]</a></li>
+<li>TencentARC/FLUX.1-mini <a href="https://huggingface.co/gpustack/FLUX.1-mini-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/FLUX.1-mini-GGUF">[ModelScope]</a></li>
+<li>Freepik/FLUX.1-lite <a href="https://huggingface.co/gpustack/FLUX.1-lite-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/FLUX.1-lite-GGUF">[ModelScope]</a></li>
+<li>black-forest-labs/FLUX.1-dev <a href="https://huggingface.co/gpustack/FLUX.1-dev-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/FLUX.1-dev-GGUF">[ModelScope]</a></li>
+<li>black-forest-labs/FLUX.1-schnell <a href="https://huggingface.co/gpustack/FLUX.1-schnell-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/FLUX.1-schnell-GGUF">[ModelScope]</a></li>
+<li>stabilityai/sdxl-turbo <a href="https://huggingface.co/gpustack/stable-diffusion-xl-1.0-turbo-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/stable-diffusion-xl-1.0-turbo-GGUF">[ModelScope]</a></li>
+<li>stabilityai/stable-diffusion-xl-refiner-1.0 <a href="https://huggingface.co/gpustack/stable-diffusion-xl-refiner-1.0-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/stable-diffusion-xl-refiner-1.0-GGUF">[ModelScope]</a></li>
+<li>stabilityai/stable-diffusion-xl-base-1.0 <a href="https://huggingface.co/gpustack/stable-diffusion-xl-base-1.0-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/stable-diffusion-xl-base-1.0-GGUF">[ModelScope]</a></li>
+<li>stabilityai/sd-turbo <a href="https://huggingface.co/gpustack/stable-diffusion-v2-1-turbo-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/stable-diffusion-v2-1-turbo-GGUF">[ModelScope]</a></li>
+<li>stabilityai/stable-diffusion-2-1 <a href="https://huggingface.co/gpustack/stable-diffusion-v2-1-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/stable-diffusion-v2-1-GGUF">[ModelScope]</a></li>
+<li>stable-diffusion-v1-5/stable-diffusion-v1-5 <a href="https://huggingface.co/gpustack/stable-diffusion-v1-5-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/stable-diffusion-v1-5-GGUF">[ModelScope]</a></li>
+<li>CompVis/stable-diffusion-v1-4 <a href="https://huggingface.co/gpustack/stable-diffusion-v1-4-GGUF">[Hugging Face]</a>, <a href="https://modelscope.cn/models/gpustack/stable-diffusion-v1-4-GGUF">[ModelScope]</a></li>
</ul>
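For illustration, a minimal sketch of pulling one of the converted GGUF repositories listed above for local inspection, using the huggingface_hub library. GPUStack resolves and downloads models itself when you deploy them, so this step is optional; the repo id and target directory below are example values.

```python
# Sketch: download a GPUStack-converted GGUF repo from the list above for
# local inspection. GPUStack fetches models itself at deployment time, so
# this is optional. Repo id and local_dir are example values.
from huggingface_hub import snapshot_download  # pip install huggingface_hub

local_path = snapshot_download(
    repo_id="gpustack/stable-diffusion-v3-5-medium-GGUF",
    local_dir="./stable-diffusion-v3-5-medium-GGUF",
)
print(f"GGUF files downloaded to: {local_path}")
```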
<h2 id="api-details">API Details</h2>
<p>The image generation APIs adhere to the OpenAI API specification. While the OpenAI image generation APIs are simple and opinionated, GPUStack extends these capabilities with additional features.</p>
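Because the endpoint follows the OpenAI schema, existing OpenAI SDKs can be pointed at GPUStack by changing only the base URL and API key. A minimal sketch with the official openai Python client follows; the base URL, API key, model name, and output path are placeholders to adapt to your own deployment.

```python
# Sketch: generate an image through the OpenAI-compatible endpoint using
# the official openai client. Base URL, API key, and model name are
# placeholders for your own GPUStack deployment.
import base64

from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="http://your-gpustack-server/v1-openai",  # your server's OpenAI-compatible API base
    api_key="your-api-key",                            # API key created in GPUStack
)

response = client.images.generate(
    model="stable-diffusion-v3-5-medium",  # name of a deployed image model
    prompt="A watercolor painting of a lighthouse at dawn",
    size="512x512",
    response_format="b64_json",
)

with open("output.png", "wb") as f:
    f.write(base64.b64decode(response.data[0].b64_json))
```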
4 changes: 2 additions & 2 deletions 0.4/user-guide/inference-backends/index.html
@@ -1970,8 +1970,8 @@ <h3 id="supported-platforms">Supported Platforms</h3>
<h3 id="supported-models">Supported Models</h3>
<ul>
<li>LLMs: For supported LLMs, refer to the llama.cpp <a href="https://github.com/ggerganov/llama.cpp#description">README</a>.</li>
-<li>Difussion Models: Supported models are listed in this <a href="https://huggingface.co/collections/gpustack/image-672dafeb2fa0d02dbe2539a9">Hugging Face collection</a>.</li>
-<li>Reranker Models: Supported models can be found in this <a href="https://huggingface.co/collections/gpustack/reranker-6721a234527f6fcd90deedc4">Hugging Face collection</a>.</li>
+<li>Diffusion Models: Supported models are listed in this <a href="https://huggingface.co/collections/gpustack/image-672dafeb2fa0d02dbe2539a9">Hugging Face collection</a> or this <a href="https://modelscope.cn/collections/Image-fab3d241f8a641">ModelScope collection</a>.</li>
+<li>Reranker Models: Supported models can be found in this <a href="https://huggingface.co/collections/gpustack/reranker-6721a234527f6fcd90deedc4">Hugging Face collection</a> or this <a href="https://modelscope.cn/collections/Reranker-7576210e79de4a">ModelScope collection</a>.</li>
</ul>
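As a rough illustration for the reranker models above: the sketch below assumes GPUStack exposes a Jina-style /v1/rerank endpoint (verify the exact path and payload against your version's API reference); the server URL, API key, and model name are placeholders.

```python
# Sketch: score documents against a query with a deployed reranker model.
# Assumes a Jina-style /v1/rerank endpoint; check your GPUStack version's
# API reference for the exact path and payload. URL, key, and model name
# are placeholders.
import requests

resp = requests.post(
    "http://your-gpustack-server/v1/rerank",
    headers={"Authorization": "Bearer your-api-key"},
    json={
        "model": "bge-reranker-v2-m3",  # a deployed reranker model
        "query": "How do I deploy an image model?",
        "documents": [
            "GPUStack can serve GGUF diffusion models for image generation.",
            "The rerank endpoint scores documents by relevance to a query.",
        ],
        "top_n": 2,
    },
    timeout=60,
)
resp.raise_for_status()
for result in resp.json().get("results", []):
    print(result)
```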
<h3 id="supported-features">Supported Features</h3>
<h4 id="allow-cpu-offloading">Allow CPU Offloading</h4>
