AutoGGUF provides a graphical user interface for quantizing GGUF models using the llama.cpp library. It allows users to download different versions of llama.cpp, manage multiple backends, and perform quantization tasks with various options.
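At its core, a quantization task boils down to invoking llama.cpp's quantize tool on a GGUF file. The sketch below shows the general shape of such an invocation; the binary name, file names, and the `Q4_K_M` quantization type are assumptions for illustration and should be matched to your llama.cpp build.

```python
import subprocess

def build_quantize_cmd(input_gguf, output_gguf, quant_type="Q4_K_M",
                       binary="llama-quantize"):
    """Return the command line for one quantization task.

    The binary name and default quant type are assumptions; adjust
    them to match your llama.cpp build and desired precision.
    """
    return [binary, input_gguf, output_gguf, quant_type]

cmd = build_quantize_cmd("model-f16.gguf", "model-Q4_K_M.gguf")
# subprocess.run(cmd, check=True)  # uncomment with a real llama.cpp build
print(" ".join(cmd))
```

AutoGGUF wraps this kind of call in a GUI, queuing tasks and exposing the various quantization options as form fields.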
### Windows
```bash
build RELEASE|DEV
```
Find the executable in `build/<type>/dist/AutoGGUF.exe`.
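The output path can be assembled programmatically, e.g. when scripting around the build. A minimal sketch, assuming the working directory is the repository root:

```python
from pathlib import Path

def built_executable(build_type: str) -> Path:
    # Mirrors the documented output layout: build/<type>/dist/AutoGGUF.exe
    return Path("build") / build_type / "dist" / "AutoGGUF.exe"

print(built_executable("RELEASE").as_posix())
```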
## Dependencies
- PyQt6
- psutil
- shutil
- numpy
- torch
- safetensors
- gguf (bundled)
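Before launching, it can be useful to verify that the listed dependencies are importable. A small sketch, assuming each package's import name matches its listed name (`shutil` ships with the standard library):

```python
import importlib.util

def find_missing(modules):
    # Report which of the given module names cannot be resolved
    # in the current environment, without importing them.
    return [m for m in modules if importlib.util.find_spec(m) is None]

deps = ["PyQt6", "psutil", "shutil", "numpy", "torch", "safetensors", "gguf"]
missing = find_missing(deps)
if missing:
    print("missing:", ", ".join(missing))
```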
## Localizations
Set the `AUTOGGUF_LANGUAGE` environment variable to use a specific language.
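For illustration, this is roughly how such a setting could be read at startup. This is a hypothetical sketch, not AutoGGUF's actual code, and the `"en"` fallback is an assumption rather than a documented default:

```python
import os

def selected_language(default="en"):
    # AUTOGGUF_LANGUAGE selects the UI language; the fallback value
    # here is an assumption of this sketch.
    return os.environ.get("AUTOGGUF_LANGUAGE", default)

os.environ["AUTOGGUF_LANGUAGE"] = "fr"  # e.g. export AUTOGGUF_LANGUAGE=fr
print(selected_language())
```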
## Known Issues
- Saving preset while quantizing causes UI thread crash (planned fix: remove this feature)
- Cannot delete task while processing (planned fix: disallow deletion before cancelling or cancel automatically)
- ~~Base Model text still shows when GGML is selected as LoRA type (fix: include text in show/hide Qt layout)~~ (fixed in v1.4.2)
## Planned Features
## Contributing
Fork the repo, make your changes, and ensure you have the latest commits when merging. Include a changelog of new features in your pull request description. Read `CONTRIBUTING.md` for more information.