README.md: 10 additions & 2 deletions
@@ -22,18 +22,20 @@ Windows:
 2. Enjoy!
 
 **Building**:
-```
+```bash
 cd src
 pip install -U pyinstaller
 pyinstaller main.py
 cd dist/main
-main
+./main
 ```
 
 **Dependencies**:
 - PyQt6
 - requests
 - psutil
+- shutil
+- OpenSSL
 
 **Localizations:**
 
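The dependency list in the hunk above adds `shutil` and OpenSSL alongside PyQt6, requests, and psutil. As a minimal setup sketch (not part of the README itself): the pip-installable packages can be pulled in as shown below; `shutil` ships with the Python standard library, and the OpenSSL entry is assumed to refer to the system library or a Python binding such as pyOpenSSL.

```bash
# Minimal sketch: install the pip-installable dependencies listed above.
# shutil is part of the Python standard library, so it needs no pip install;
# OpenSSL is assumed to come from the system (or a binding such as pyOpenSSL).
pip install PyQt6 requests psutil
```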
@@ -75,9 +77,15 @@ In order to use them, please set the `AUTOGGUF_LANGUAGE` environment variable to
 **Issues:**
 - Actual progress bar tracking
 - Download safetensors from HF and convert to unquantized GGUF
+- Perplexity testing
+- Cannot disable llama.cpp update check on startup
+- `_internal` directory required, will see if I can package this into a single exe on the next release
+- ~~Custom command line parameters~~ (added in v1.3.0)
+- ~~More iMatrix generation parameters~~ (added in v1.3.0)
 - ~~Specify multiple KV overrides~~ (added in v1.1.0)
 - ~~Better error handling~~ (added in v1.1.0)
 - ~~Cannot select output/token embd type~~ (fixed in v1.1.0)
+- ~~Importing presets with KV overrides causes UI thread crash~~ (fixed in v1.3.0)
 
 **Troubleshooting:**
 - ~~llama.cpp quantization errors out with an iostream error: create the `quantized_models` directory (or set a directory)~~ (fixed in v1.2.1, automatically created on launch)
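The new issues list above notes that the `_internal` directory is still required and that a single executable may come in a later release. As a hedged sketch only (not the project's documented build recipe), PyInstaller's `--onefile` mode is the usual way to produce one self-contained binary; whether AutoGGUF runs correctly when packaged this way is not confirmed here.

```bash
# Hedged sketch: --onefile bundles everything into a single executable
# instead of a dist/main folder with an _internal directory.
cd src
pyinstaller --onefile main.py
./dist/main   # single-file binary; no _internal directory alongside it
```

The second hunk also references the `AUTOGGUF_LANGUAGE` environment variable for localizations. A usage sketch follows; the value is a placeholder assumption, so substitute one of the language codes actually listed in the README.

```bash
# Placeholder value: replace with a language code supported by AutoGGUF.
export AUTOGGUF_LANGUAGE=en-US
```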