1 parent 29b6e9a commit b14dd98
CHANGELOG.md
@@ -7,6 +7,12 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

+## [0.2.68]
+
+- feat: Update llama.cpp to ggerganov/llama.cpp@
+- feat: Add option to enable flash_attn to Llama params and ModelSettings by @abetlen in 22d77eefd2edaf0148f53374d0cac74d0e25d06e
+- fix(ci): Fix build-and-release.yaml by @Smartappli in #1413

## [0.2.67]

- fix: Ensure image renders before text in chat formats regardless of message content order by @abetlen in 3489ef09d3775f4a87fb7114f619e8ba9cb6b656
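
The flash_attn entry above adds a boolean flag to the `Llama` constructor (mirrored by the server's `ModelSettings`). A minimal sketch of how it might be used, assuming a locally available GGUF model at a placeholder path:

```python
# Hypothetical usage of the flash_attn flag added in 0.2.68.
# The model path is a placeholder, not part of this commit.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model.Q4_K_M.gguf",  # placeholder GGUF file
    n_gpu_layers=-1,  # offload all layers; flash attention targets GPU backends
    flash_attn=True,  # enable llama.cpp's flash attention kernels
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])
```

Since the changelog says `ModelSettings` gained the same option, the OpenAI-compatible server should accept it through its usual settings mechanism as well.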
llama_cpp/__init__.py
@@ -1,4 +1,4 @@
from .llama_cpp import *
from .llama import *

-__version__ = "0.2.67"
+__version__ = "0.2.68"
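
Since `__init__.py` both re-exports the bindings and pins `__version__`, the bump is easy to verify after upgrading; a small sanity check:

```python
# Confirm the installed package matches the version bumped in this commit.
import llama_cpp

print(llama_cpp.__version__)  # expected: "0.2.68"
```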