feat: support key shortcuts for AutoFP8 window
- update README.md for v1.8.1
- remove aliased quant types
- update .env.example with all configuration parameters
leafspark committed Sep 5, 2024
1 parent d55cb9e commit 24ae006
Showing 3 changed files with 14 additions and 9 deletions.
3 changes: 3 additions & 0 deletions .env.example
@@ -6,3 +6,6 @@ AUTOGGUF_SERVER_API_KEY=
AUTOGGUF_MODEL_DIR_NAME=models
AUTOGGUF_OUTPUT_DIR_NAME=quantized_models
AUTOGGUF_RESIZE_FACTOR=1.1
AUTOGGUF_SERVER=enabled
AUTOGGUF_SERVER_PORT=7001
AUTOGGUF_SERVER_API_KEY=
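For reference, a minimal sketch of how an application might read these `.env` settings at startup, assuming the `python-dotenv` package is available; the variable names come from the `.env.example` diff above, while the defaults and structure here are illustrative only.

```python
# Minimal sketch (assumes python-dotenv); variable names come from
# .env.example above, defaults are illustrative only.
import os
from dotenv import load_dotenv

load_dotenv()  # read .env from the current working directory

server_enabled = os.getenv("AUTOGGUF_SERVER", "enabled") == "enabled"
server_port = int(os.getenv("AUTOGGUF_SERVER_PORT", "7001"))
model_dir = os.getenv("AUTOGGUF_MODEL_DIR_NAME", "models")
output_dir = os.getenv("AUTOGGUF_OUTPUT_DIR_NAME", "quantized_models")
resize_factor = float(os.getenv("AUTOGGUF_RESIZE_FACTOR", "1.1"))
```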
16 changes: 10 additions & 6 deletions README.md
@@ -35,6 +35,7 @@ AutoGGUF provides a graphical user interface for quantizing GGUF models using th
- Parallel quantization + imatrix generation
- LoRA conversion and merging
- Preset saving and loading
- AutoFP8 quantization

## Usage

@@ -49,6 +50,8 @@ AutoGGUF provides a graphical user interface for quantizing GGUF models using th
```
or use the `run.bat` script.

macOS and Ubuntu builds are provided via GitHub Actions; you can download the binaries from the releases section.

### Windows
Standard builds:
1. Download the latest release
@@ -62,6 +65,8 @@ Setup builds:
4. The .GGUF extension will be registered with the program automatically
5. Run the program from the Start Menu or desktop shortcuts

After launching the program, you can access its local server on port 7001 (set `AUTOGGUF_SERVER` to "enabled" first).
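As a hedged sketch, one way to check that the local server is reachable from Python using `requests`; the root path and `X-API-Key` header below are assumptions, not documented endpoints of AutoGGUF's server.

```python
# Hypothetical health check against the local AutoGGUF server; the "/"
# path and X-API-Key header are assumptions, not documented endpoints.
import os
import requests

port = os.getenv("AUTOGGUF_SERVER_PORT", "7001")
api_key = os.getenv("AUTOGGUF_SERVER_API_KEY", "")

resp = requests.get(
    f"http://localhost:{port}/",
    headers={"X-API-Key": api_key} if api_key else {},
    timeout=5,
)
print(resp.status_code)
```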

### Verifying Releases

#### Linux/macOS:
@@ -77,11 +82,11 @@ sha256sum -c AutoGGUF-v1.5.0-prerel.sha256
gpg --import AutoGGUF-v1.5.0-prerel.asc
# Verify the signature
gpg --verify AutoGGUF-v1.5.0-Windows-avx2-prerel.zip.sig AutoGGUF-v1.5.0-Windows-avx2-prerel.zip
gpg --verify AutoGGUF-v1.8.1-Windows-avx2.zip.sig AutoGGUF-v1.8.1-Windows-avx2.zip
# Check SHA256
$fileHash = (Get-FileHash -Algorithm SHA256 AutoGGUF-v1.5.0-Windows-avx2-prerel.zip).Hash.ToLower()
$storedHash = (Get-Content AutoGGUF-v1.5.0-prerel.sha256 | Select-String AutoGGUF-v1.5.0-Windows-avx2-prerel.zip).Line.Split()[0]
$fileHash = (Get-FileHash -Algorithm SHA256 AutoGGUF-v1.8.1-Windows-avx2.zip).Hash.ToLower()
$storedHash = (Get-Content AutoGGUF-v1.8.1.sha256 | Select-String AutoGGUF-v1.8.1-Windows-avx2.zip).Line.Split()[0]
if ($fileHash -eq $storedHash) { "SHA256 Match" } else { "SHA256 Mismatch" }
```

@@ -118,7 +123,7 @@ View the list of supported languages at [AutoGGUF/wiki/Installation#configuratio

To use a specific language, set the `AUTOGGUF_LANGUAGE` environment variable to one of the listed language codes (note: some languages may not be fully supported yet; those will fall back to English).

## Known Issues
## Issues

- None!

@@ -127,9 +132,8 @@ To use a specific language, set the `AUTOGGUF_LANGUAGE` environment variable to
- Time estimation for quantization
- Actual progress bar tracking
- Perplexity testing
- Web API and management (partially implemented in v1.6.2)
- HuggingFace upload/download (coming in the next release)
- AutoFP8 quantization and bitsandbytes (coming in the next release)
- AutoFP8 quantization (partially done) and bitsandbytes (coming soon)

## Troubleshooting

4 changes: 1 addition & 3 deletions src/AutoGGUF.py
@@ -153,6 +153,7 @@ def __init__(self, args: List[str]) -> None:
# Tools menu
tools_menu = self.menubar.addMenu("&Tools")
autofp8_action = QAction("&AutoFP8", self)
autofp8_action.setShortcut(QKeySequence("Shift+Q"))
autofp8_action.triggered.connect(self.show_autofp8_window)
tools_menu.addAction(autofp8_action)
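For context, a self-contained sketch of the pattern the added line uses: binding a keyboard shortcut to a menu action in Qt. The PySide6 class locations and the window setup here are assumptions for illustration, not AutoGGUF's actual classes.

```python
# Standalone sketch of a menu action bound to a keyboard shortcut.
# Uses PySide6 class locations; AutoGGUF's own Qt binding may differ.
import sys
from PySide6.QtGui import QAction, QKeySequence
from PySide6.QtWidgets import QApplication, QMainWindow

app = QApplication(sys.argv)
window = QMainWindow()

tools_menu = window.menuBar().addMenu("&Tools")
autofp8_action = QAction("&AutoFP8", window)
autofp8_action.setShortcut(QKeySequence("Shift+Q"))  # key shortcut added in this commit
autofp8_action.triggered.connect(lambda: print("AutoFP8 window requested"))
tools_menu.addAction(autofp8_action)

window.show()
sys.exit(app.exec())
```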

@@ -321,17 +322,14 @@ def __init__(self, args: List[str]) -> None:
"IQ3_XXS",
"IQ3_S",
"IQ3_M",
"Q3_K",
"IQ3_XS",
"Q3_K_S",
"Q3_K_M",
"Q3_K_L",
"IQ4_NL",
"IQ4_XS",
"Q4_K",
"Q4_K_S",
"Q4_K_M",
"Q5_K",
"Q5_K_S",
"Q5_K_M",
"Q6_K",
