Commit 23db0b2

committed
Prep for 3.4.3 release
1 parent d1b3522 commit 23db0b2

File tree: 4 files changed, +7 -7 lines changed


.github/workflows/python-package.yml

Lines changed: 4 additions & 4 deletions
@@ -16,7 +16,7 @@ jobs:
     strategy:
       matrix:
         os: [ubuntu-latest]
-        python-version: [ "3.7", "3.8", "3.9", "3.10", "3.11", "pypy-3.7", "pypy-3.8", "pypy-3.9"]
+        python-version: [ "3.8", "3.9", "3.10", "3.11", "pypy-3.9", "pypy-3.10"]
 
     steps:
     - uses: actions/checkout@v3
@@ -28,10 +28,10 @@ jobs:
       run: |
         python -m pip install --upgrade pip wheel setuptools
         python -m pip install -e .
-    - name: Type check with mypy (only on Python 3.7)
+    - name: Type check with mypy (only on Python 3.8)
       run: |
-        if [ "${{ matrix.python-version }}" == "3.7" ]; then python -m pip install mypy; fi
-        if [ "${{ matrix.python-version }}" == "3.7" ]; then mypy --python-version=3.7 src/tokenizer; fi
+        if [ "${{ matrix.python-version }}" == "3.8" ]; then python -m pip install mypy; fi
+        if [ "${{ matrix.python-version }}" == "3.8" ]; then mypy --python-version=3.8 src/tokenizer; fi
     - name: Test with pytest
       run: |
         python -m pip install pytest
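The type-check step above runs on exactly one matrix entry, so mypy executes once per workflow run instead of once per matrix cell. As a minimal sketch of that gating logic (the real check is the shell conditional shown in the diff; the function and constant names here are illustrative, not part of the repository):

```python
# Illustrative model of the workflow's gating logic; the actual check is a
# shell conditional on ${{ matrix.python-version }} in python-package.yml.
MYPY_GATE = "3.8"  # the single matrix entry that runs the type check

def should_run_mypy(matrix_python_version: str) -> bool:
    """True only for the designated CPython entry; all other cells,
    including the PyPy entries, skip the mypy step."""
    return matrix_python_version == MYPY_GATE
```

Gating on the new minimum version (rather than the newest) also ensures mypy validates the code against the oldest syntax and stdlib the project must still support.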

README.rst

Lines changed: 2 additions & 1 deletion
@@ -12,7 +12,7 @@ Tokenization is a necessary first step in many natural language processing
 tasks, such as word counting, parsing, spell checking, corpus generation, and
 statistical analysis of text.
 
-**Tokenizer** is a compact pure-Python (>= 3.7) executable
+**Tokenizer** is a compact pure-Python (>= 3.8) executable
 program and module for tokenizing Icelandic text. It converts input text to
 streams of *tokens*, where each token is a separate word, punctuation sign,
 number/amount, date, e-mail, URL/URI, etc. It also segments the token stream
@@ -809,6 +809,7 @@ can be found in the file ``test/toktest_normal_gold_expected.txt``.
 Changelog
 ---------
 
+* Version 3.4.3: Various minor fixes. Now requires Python 3.8 or later.
 * Version 3.4.2: Abbreviations and phrases added, ``META_BEGIN`` token added.
 * Version 3.4.1: Improved performance on long input chunks.
 * Version 3.4.0: Improved handling and normalization of punctuation.

setup.py

Lines changed: 0 additions & 1 deletion
@@ -83,7 +83,6 @@ def read(*names: str, **kwargs: Any) -> str:
         "Natural Language :: Icelandic",
         "Programming Language :: Python",
         "Programming Language :: Python :: 3",
-        "Programming Language :: Python :: 3.7",
         "Programming Language :: Python :: 3.8",
         "Programming Language :: Python :: 3.9",
         "Programming Language :: Python :: 3.10",
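Dropping the ``3.7`` classifier only changes package metadata; the interpreter floor itself is enforced by packaging tools. As a hedged illustration (not code from this commit), a runtime guard equivalent to the new minimum would look like:

```python
import sys

# Hypothetical guard mirroring the new minimum from this release; the
# actual package relies on its packaging metadata, not an explicit check.
MIN_VERSION = (3, 8)

def check_python(version_info=sys.version_info) -> None:
    """Raise if the interpreter is older than the supported minimum."""
    if tuple(version_info[:2]) < MIN_VERSION:
        raise RuntimeError("tokenizer 3.4.3 requires Python 3.8 or later")

check_python()  # no-op on any supported interpreter
```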

src/tokenizer/version.py

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -1 +1 @@
1-
__version__ = "3.4.2"
1+
__version__ = "3.4.3"
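``version.py`` keeps the version string in one place so every consumer reads the same value. A small sketch of comparing such dotted strings numerically (the helper name is illustrative, not part of the package):

```python
__version__ = "3.4.3"  # mirrors src/tokenizer/version.py after this commit

def version_tuple(v: str) -> tuple:
    """Convert a dotted version string to an int tuple so comparisons are
    numeric ("3.10" > "3.9") rather than lexicographic ("3.10" < "3.9")."""
    return tuple(int(part) for part in v.split("."))

assert version_tuple(__version__) > version_tuple("3.4.2")
```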
