
Commit 9ce5cb3

chore: Bump version
1 parent 4a7122d commit 9ce5cb3

2 files changed: +16 −1 lines changed


CHANGELOG.md

Lines changed: 15 additions & 0 deletions
@@ -7,6 +7,21 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.2.70]
+
+- feat: Update llama.cpp to ggerganov/llama.cpp@
+- feat: fill-in-middle support by @CISC in #1386
+- fix: adding missing args in create_completion for functionary chat handler by @skalade in #1430
+- docs: update README.md @eltociear in #1432
+- fix: chat_format log where auto-detected format prints None by @balvisio in #1434
+- feat(server): Add support for setting root_path by @abetlen in 0318702cdc860999ee70f277425edbbfe0e60419
+- feat(ci): Add docker checks and check deps more frequently by @Smartappli in #1426
+- fix: detokenization case where first token does not start with a leading space by @noamgat in #1375
+- feat: Implement streaming for Functionary v2 + Bug fixes by @jeffrey-fong in #1419
+- fix: Use memmove to copy str_value kv_override by @abetlen in 9f7a85571ae80d3b6ddbd3e1bae407b9f1e3448a
+- feat(server): Remove temperature bounds checks for server by @abetlen in 0a454bebe67d12a446981eb16028c168ca5faa81
+- fix(server): Propagate flash_attn to model load by @dthuerck in #1424
+
 ## [0.2.69]
 
 - feat: Update llama.cpp to ggerganov/llama.cpp@6ecf3189e00a1e8e737a78b6d10e1d7006e050a2

llama_cpp/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 from .llama_cpp import *
 from .llama import *
 
-__version__ = "0.2.69"
+__version__ = "0.2.70"
