diff --git a/README.md b/README.md
index c1d4181..5d050eb 100644
--- a/README.md
+++ b/README.md
@@ -2,14 +2,26 @@
 # AutoGGUF - automated GGUF model quantizer
 
+
+[![GitHub release](https://img.shields.io/github/release/leafspark/AutoGGUF.svg)](https://github.com/leafspark/AutoGGUF/releases)
+[![GitHub last commit](https://img.shields.io/github/last-commit/leafspark/AutoGGUF.svg)](https://github.com/leafspark/AutoGGUF/commits)
+[![CI/CD Status](https://img.shields.io/badge/CI%2FCD-passing-brightgreen)]()
+
+
 [![Powered by llama.cpp](https://img.shields.io/badge/Powered%20by-llama.cpp-green.svg)](https://github.com/ggerganov/llama.cpp)
-![GitHub release](https://img.shields.io/github/release/leafspark/AutoGGUF.svg)
-![GitHub last commit](https://img.shields.io/github/last-commit/leafspark/AutoGGUF.svg)
+![GitHub top language](https://img.shields.io/github/languages/top/leafspark/AutoGGUF.svg)
+[![Platform Compatibility](https://img.shields.io/badge/platform-Linux%20%7C%20macOS%20%7C%20Windows-blue)]()
+[![GitHub license](https://img.shields.io/github/license/leafspark/AutoGGUF.svg)](https://github.com/leafspark/AutoGGUF/blob/main/LICENSE)
+
+
 ![GitHub stars](https://img.shields.io/github/stars/leafspark/AutoGGUF.svg)
 ![GitHub forks](https://img.shields.io/github/forks/leafspark/AutoGGUF.svg)
-![GitHub top language](https://img.shields.io/github/languages/top/leafspark/AutoGGUF.svg)
 ![GitHub repo size](https://img.shields.io/github/repo-size/leafspark/AutoGGUF.svg)
-![GitHub license](https://img.shields.io/github/license/leafspark/AutoGGUF.svg)
+
+
+[![Code Style: Black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
+[![Issues](https://img.shields.io/github/issues/leafspark/AutoGGUF)](https://github.com/leafspark/AutoGGUF/issues)
+[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/leafspark/AutoGGUF/pulls)
 
 AutoGGUF provides a graphical user interface for quantizing GGUF models
 using the llama.cpp library. It allows users to download different versions of llama.cpp, manage multiple backends, and perform quantization tasks with various options.
 
@@ -55,17 +67,19 @@ cd dist/main
 ### Windows
 
 ```bash
-build RELEASE/DEV
+build RELEASE | DEV
 ```
 
 Find the executable in `build//dist/AutoGGUF.exe`.
 
 ## Dependencies
 
 - PyQt6
-- requests
 - psutil
 - shutil
-- OpenSSL
+- numpy
+- torch
+- safetensors
+- gguf (bundled)
 
 ## Localizations
 
@@ -77,7 +91,7 @@ To use a specific language, set the `AUTOGGUF_LANGUAGE` environment variable to
 
 - Saving preset while quantizing causes UI thread crash (planned fix: remove this feature)
 - Cannot delete task while processing (planned fix: disallow deletion before cancelling or cancel automatically)
-- Base Model text still shows when GGML is selected as LoRA type (fix: include text in show/hide Qt layout)
+- ~~Base Model text still shows when GGML is selected as LoRA type (fix: include text in show/hide Qt layout)~~ (fixed in v1.4.2)
 
 ## Planned Features
 
@@ -95,7 +109,7 @@ To use a specific language, set the `AUTOGGUF_LANGUAGE` environment variable to
 
 ## Contributing
 
-Fork the repo, make your changes, and ensure you have the latest commits when merging. Include a changelog of new features in your pull request description.
+Fork the repo, make your changes, and ensure you have the latest commits when merging. Include a changelog of new features in your pull request description. Read `CONTRIBUTING.md` for more information.
 
 ## User Interface
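
The README describes AutoGGUF as a GUI front-end that drives llama.cpp quantization. As a rough sketch of what such a front-end does under the hood, the snippet below assembles the argument list for a llama.cpp quantize binary and only invokes it when the binary is actually on `PATH`. The binary name (`llama-quantize`, as shipped by recent llama.cpp builds), the file names, and the quantization type are illustrative assumptions, not taken from AutoGGUF's source.

```python
import shutil
import subprocess

def build_quantize_command(binary: str, src: str, dst: str, qtype: str = "Q4_K_M") -> list:
    """Assemble a llama.cpp quantize invocation.

    Hypothetical helper for illustration: recent llama.cpp builds expose the
    tool as `llama-quantize <input.gguf> <output.gguf> <type>`.
    """
    return [binary, src, dst, qtype]

cmd = build_quantize_command("llama-quantize", "model-f16.gguf", "model-Q4_K_M.gguf")

# Only run the command if the binary exists locally; otherwise this is a no-op.
if shutil.which(cmd[0]):
    subprocess.run(cmd, check=True)
```

A GUI like AutoGGUF essentially wraps this kind of invocation, letting the user pick the backend build, model file, and quantization type from widgets instead of the shell.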
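
The Localizations hunk references the `AUTOGGUF_LANGUAGE` environment variable. A minimal sketch of consuming it, assuming a simple fallback default; the helper name and the `en-US` fallback are hypothetical, only the variable name comes from the README:

```python
import os

def resolve_language(default: str = "en-US") -> str:
    """Return the UI language from AUTOGGUF_LANGUAGE, else a fallback.

    Illustrative helper, not AutoGGUF's actual implementation.
    """
    return os.environ.get("AUTOGGUF_LANGUAGE", default)

# Example: select French for one run without touching the shell profile.
os.environ["AUTOGGUF_LANGUAGE"] = "fr-FR"
print(resolve_language())  # prints "fr-FR"
```

The same effect is achieved from the shell by exporting the variable before launching the app, which matches how the README says the setting is applied.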