
Commit 7d6bcad

Merge pull request #2302 from opentensor/backmerge/release-7.4.0
Backmerge/release 7.4.0
2 parents: c7b4739 + 7a5fc76

File tree: 7 files changed (+158, -88 lines)


CHANGELOG.md

Lines changed: 78 additions & 0 deletions
@@ -1,5 +1,83 @@
 # Changelog
 
+## 7.4.0 /2024-08-29
+
+## What's Changed
+* [Fix] Allow unstake below network min by @camfairchild in https://github.com/opentensor/bittensor/pull/2016
+* Tests/e2e tests staging by @open-junius in https://github.com/opentensor/bittensor/pull/1943
+* Chore: Backmerge 7.2 by @gus-opentensor in https://github.com/opentensor/bittensor/pull/2020
+* Fix broken tests and Enforce BTCLI usage by @opendansor in https://github.com/opentensor/bittensor/pull/2027
+* Add time delay to faucet by @opendansor in https://github.com/opentensor/bittensor/pull/2030
+* Skip faucet test by @opendansor in https://github.com/opentensor/bittensor/pull/2031
+* Adds normalization for alpha hyperparams by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2035
+* Revert info logging in processing response by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2043
+* Pin numpy version to 1.26.4 in prod.txt by @rajkaramchedu in https://github.com/opentensor/bittensor/pull/2045
+* Test hot key Swap by @opendansor in https://github.com/opentensor/bittensor/pull/2044
+* Do not run Circle-CI on drafts by @thewhaleking in https://github.com/opentensor/bittensor/pull/1959
+* Enhancement: Detailed nonce information in-case of failures by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2050
+* fix bittensor not installing under Python 3.13 by @mjurbanski-reef in https://github.com/opentensor/bittensor/pull/2053
+* Enable Faucet Test by @opendansor in https://github.com/opentensor/bittensor/pull/2056
+* Add back BT_SUBTENSOR_CHAIN_ENDPOINT env variable by @bradleytf in https://github.com/opentensor/bittensor/pull/2034
+* Fix: Logging configs not being set by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2065
+* Feature/gus/liquid alpha params by @gus-opentensor in https://github.com/opentensor/bittensor/pull/2012
+* Test Emissions E2E by @opendansor in https://github.com/opentensor/bittensor/pull/2036
+* Prevent e2e draft by @opendansor in https://github.com/opentensor/bittensor/pull/2072
+* Fix e2e to only run when PR is ready for review by @opendansor in https://github.com/opentensor/bittensor/pull/2077
+* Fix Faucet and fastblocks interaction by @opendansor in https://github.com/opentensor/bittensor/pull/2083
+* Float normalization for child hotkeys by @opendansor in https://github.com/opentensor/bittensor/pull/2093
+* Fix e2e test hanging by @open-junius in https://github.com/opentensor/bittensor/pull/2118
+* Fixes leaked semaphores by @thewhaleking in https://github.com/opentensor/bittensor/pull/2125
+* Backmerge master -> staging by @roman-opentensor in https://github.com/opentensor/bittensor/pull/2136
+* fix: coldkeypub usage instead of coldkey for arbitration_stats by @Rapiiidooo in https://github.com/opentensor/bittensor/pull/2132
+* Removes extra no_prompts in commands by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2140
+* Adds timeout for e2e tests by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2141
+* fix: updates test_axon verify body async tests by @gus-opentensor in https://github.com/opentensor/bittensor/pull/2142
+* test: fix mocksubtensor query previous blocks by @timabilov in https://github.com/opentensor/bittensor/pull/2139
+* Adds E2E for Metagraph command by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2143
+* feat: Enhance dendrite error messaging by @gus-opentensor in https://github.com/opentensor/bittensor/pull/2117
+* Adds E2E Tests for wallet creation commands by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2145
+* [Ledger Integration] [Feature] bump pysub to 1.7.9+ by @camfairchild in https://github.com/opentensor/bittensor/pull/2156
+* Ruff complains about an extra line by @roman-opentensor in https://github.com/opentensor/bittensor/pull/2158
+* support Wallet names with hyphens when passing password through ENV vars by @mjurbanski-reef in https://github.com/opentensor/bittensor/pull/1949
+* Fix naming convention of swap hotkey test by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2162
+* Adds E2E test for wallet regenerations + fixes input bug for regen hotkey by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2149
+* Backmerge Master -> Staging (7.4) by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2170
+* ci: auto assigns cortex to opened PRs by @gus-opentensor in https://github.com/opentensor/bittensor/pull/2184
+* CI/E2E test improvements by @mvds00 in https://github.com/opentensor/bittensor/pull/2168
+* Fix multiprocessing POW errors and No Torch logging errors by @thewhaleking in https://github.com/opentensor/bittensor/pull/2186
+* ci: update reviewers by @gus-opentensor in https://github.com/opentensor/bittensor/pull/2189
+* Adds updated type in timeouts dendrite by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2196
+* Bumps setuptools ~=70.0.0 by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2150
+* Bump black from 23.7.0 to 24.3.0 in /requirements by @dependabot in https://github.com/opentensor/bittensor/pull/2197
+* btlogging/loggingmachine.py: Fix bw compat API. by @mvds00 in https://github.com/opentensor/bittensor/pull/2155
+* Check for participation before nomination call by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2193
+* test: subnet list e2e by @gus-opentensor in https://github.com/opentensor/bittensor/pull/2198
+* ensure msg is str in _concat_msg by @thewhaleking in https://github.com/opentensor/bittensor/pull/2200
+* Fixes tests depending on explicit line numbers by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2211
+* Merge streaming fix to staging by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2183
+* Multiple bittensor versions e2e workflow by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2212
+* Changes name of workflow file by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2213
+* Enhances e2e tests to contain assertions & logging by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2192
+* Security fix: Bumps ansible and certifi by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2214
+* Wallet List Command e2e test by @gus-opentensor in https://github.com/opentensor/bittensor/pull/2207
+* fix Synapse base performance (more than 10x speed up) by @mjurbanski-reef in https://github.com/opentensor/bittensor/pull/2161
+* Child Hotkeys by @opendansor in https://github.com/opentensor/bittensor/pull/2071
+* Improve child hotkeys QOL by @opendansor in https://github.com/opentensor/bittensor/pull/2225
+* Child hotkeys handle excess normalization by @opendansor in https://github.com/opentensor/bittensor/pull/2229
+* Fixes chain compilation timeouts by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2238
+* Update Child Hotkey commands by @opendansor in https://github.com/opentensor/bittensor/pull/2245
+* feat: return error message instead of raising exception by @gus-opentensor in https://github.com/opentensor/bittensor/pull/2244
+* Backmerge master to staging (7.3.1) by @ibraheem-opentensor in https://github.com/opentensor/bittensor/pull/2254
+
+## New Contributors
+* @bradleytf made their first contribution in https://github.com/opentensor/bittensor/pull/2034
+* @Rapiiidooo made their first contribution in https://github.com/opentensor/bittensor/pull/2132
+* @timabilov made their first contribution in https://github.com/opentensor/bittensor/pull/2139
+* @mvds00 made their first contribution in https://github.com/opentensor/bittensor/pull/2168
+* @dependabot made their first contribution in https://github.com/opentensor/bittensor/pull/2197
+
+**Full Changelog**: https://github.com/opentensor/bittensor/compare/v7.3.1...v7.4.0
+
 ## 7.3.1 / 2024-08-19
 
 ## What's Changed

VERSION

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-7.3.1
+7.4.0

bittensor/__init__.py

Lines changed: 2 additions & 2 deletions
@@ -40,7 +40,7 @@
 
 
 # Bittensor code and protocol version.
-__version__ = "7.3.1"
+__version__ = "7.4.0"
 
 _version_split = __version__.split(".")
 __version_info__ = tuple(int(part) for part in _version_split)
@@ -131,7 +131,7 @@ def debug(on: bool = True):
 ) is not None:
     __local_entrypoint__ = BT_SUBTENSOR_CHAIN_ENDPOINT
 else:
-    __local_entrypoint__ = "ws://127.0.0.1:9946"
+    __local_entrypoint__ = "ws://127.0.0.1:9944"
 
 
 __tao_symbol__: str = chr(0x03C4)
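
Besides the version bump, the functional change in this file is the default local chain endpoint, which moves from port 9946 to 9944. Below is a minimal standalone sketch of the fallback shown in the hunk above, assuming the override is read from the `BT_SUBTENSOR_CHAIN_ENDPOINT` environment variable (per PR #2034 in the changelog); the variable names are illustrative and not the module's actual internals.

```python
import os

# Sketch only: mirrors the endpoint fallback in the hunk above, not the actual
# bittensor/__init__.py. The env-var name comes from PR #2034 in the changelog.
_endpoint_override = os.getenv("BT_SUBTENSOR_CHAIN_ENDPOINT")
if _endpoint_override is not None:
    __local_entrypoint__ = _endpoint_override
else:
    # New default for a locally running subtensor node as of 7.4.0 (was 9946).
    __local_entrypoint__ = "ws://127.0.0.1:9944"

print(__local_entrypoint__)
```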

bittensor/btlogging/loggingmachine.py

Lines changed: 26 additions & 38 deletions
@@ -29,6 +29,7 @@
 import os
 import sys
 from logging.handlers import QueueHandler, QueueListener, RotatingFileHandler
+from logging import Logger
 from typing import NamedTuple
 
 from statemachine import State, StateMachine
@@ -55,7 +56,7 @@ class LoggingConfig(NamedTuple):
     logging_dir: str
 
 
-class LoggingMachine(StateMachine):
+class LoggingMachine(StateMachine, Logger):
     """Handles logger states for bittensor and 3rd party libraries."""
 
     Default = State(initial=True)
@@ -360,58 +361,45 @@ def __trace_on__(self) -> bool:
         """
         return self.current_state_value == "Trace"
 
-    @staticmethod
-    def _concat_msg(*args):
-        return " - ".join(str(el) for el in args if el != "")
-
-    def trace(self, msg="", *args, prefix="", suffix="", **kwargs):
+    def trace(self, msg="", prefix="", suffix="", *args, **kwargs):
         """Wraps trace message with prefix and suffix."""
-        msg = self._concat_msg(prefix, msg, suffix)
-        self._logger.trace(msg, *args, **kwargs, stacklevel=2)
+        msg = f"{prefix} - {msg} - {suffix}"
+        self._logger.trace(msg, *args, **kwargs)
 
-    def debug(self, msg="", *args, prefix="", suffix="", **kwargs):
+    def debug(self, msg="", prefix="", suffix="", *args, **kwargs):
         """Wraps debug message with prefix and suffix."""
-        msg = self._concat_msg(prefix, msg, suffix)
-        self._logger.debug(msg, *args, **kwargs, stacklevel=2)
+        msg = f"{prefix} - {msg} - {suffix}"
+        self._logger.debug(msg, *args, **kwargs)
 
-    def info(self, msg="", *args, prefix="", suffix="", **kwargs):
+    def info(self, msg="", prefix="", suffix="", *args, **kwargs):
         """Wraps info message with prefix and suffix."""
-        msg = self._concat_msg(prefix, msg, suffix)
-        self._logger.info(msg, *args, **kwargs, stacklevel=2)
+        msg = f"{prefix} - {msg} - {suffix}"
+        self._logger.info(msg, *args, **kwargs)
 
-    def success(self, msg="", *args, prefix="", suffix="", **kwargs):
+    def success(self, msg="", prefix="", suffix="", *args, **kwargs):
        """Wraps success message with prefix and suffix."""
-        msg = self._concat_msg(prefix, msg, suffix)
-        self._logger.success(msg, *args, **kwargs, stacklevel=2)
+        msg = f"{prefix} - {msg} - {suffix}"
+        self._logger.success(msg, *args, **kwargs)
 
-    def warning(self, msg="", *args, prefix="", suffix="", **kwargs):
+    def warning(self, msg="", prefix="", suffix="", *args, **kwargs):
         """Wraps warning message with prefix and suffix."""
-        msg = self._concat_msg(prefix, msg, suffix)
-        self._logger.warning(msg, *args, **kwargs, stacklevel=2)
+        msg = f"{prefix} - {msg} - {suffix}"
+        self._logger.warning(msg, *args, **kwargs)
 
-    def error(self, msg="", *args, prefix="", suffix="", **kwargs):
+    def error(self, msg="", prefix="", suffix="", *args, **kwargs):
         """Wraps error message with prefix and suffix."""
-        msg = self._concat_msg(prefix, msg, suffix)
-        self._logger.error(msg, *args, **kwargs, stacklevel=2)
+        msg = f"{prefix} - {msg} - {suffix}"
+        self._logger.error(msg, *args, **kwargs)
 
-    def critical(self, msg="", *args, prefix="", suffix="", **kwargs):
+    def critical(self, msg="", prefix="", suffix="", *args, **kwargs):
         """Wraps critical message with prefix and suffix."""
-        msg = self._concat_msg(prefix, msg, suffix)
-        self._logger.critical(msg, *args, **kwargs, stacklevel=2)
+        msg = f"{prefix} - {msg} - {suffix}"
+        self._logger.critical(msg, *args, **kwargs)
 
-    def exception(self, msg="", *args, prefix="", suffix="", **kwargs):
+    def exception(self, msg="", prefix="", suffix="", *args, **kwargs):
         """Wraps exception message with prefix and suffix."""
-        msg = self._concat_msg(prefix, msg, suffix)
-        stacklevel = 2
-        if (
-            sys.implementation.name == "cpython"
-            and sys.version_info.major == 3
-            and sys.version_info.minor < 11
-        ):
-            # Note that, on CPython < 3.11, exception() calls through to
-            # error() without adjusting stacklevel, so we have to increment it.
-            stacklevel += 1
-        self._logger.exception(msg, *args, **kwargs, stacklevel=stacklevel)
+        msg = f"{prefix} - {msg} - {suffix}"
+        self._logger.exception(msg, *args, **kwargs)
 
     def on(self):
         """Enable default state."""

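
The net effect of this hunk: the `_concat_msg` helper and the `stacklevel` bookkeeping are removed, `prefix` and `suffix` move ahead of `*args` in each signature, and every level method now builds its message with one f-string before delegating to the underlying logger. A minimal sketch of the resulting formatting, assuming only what the diff shows (the helper function below is illustrative, not part of bittensor):

```python
# Illustrative stand-in for the new message formatting; the real methods pass
# the formatted string on to the underlying logger (trace/debug/info/...).
def _format(msg: str = "", prefix: str = "", suffix: str = "") -> str:
    return f"{prefix} - {msg} - {suffix}"


print(_format("syncing metagraph", prefix="validator", suffix="netuid 1"))
# validator - syncing metagraph - netuid 1

print(_format("syncing metagraph"))
#  - syncing metagraph -
# Empty prefix/suffix still produce the " - " separators, unlike the removed
# _concat_msg helper, which skipped empty parts.
```

That behavioural difference lines up with the removal of `test_log_sanity` in tests/unit_tests/test_logging.py further down in this commit, which asserted the old skip-empty formatting.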
bittensor/metagraph.py

Lines changed: 20 additions & 12 deletions
@@ -25,10 +25,11 @@
 import bittensor
 from os import listdir
 from os.path import join
-from typing import List, Optional, Union, Tuple
+from typing import List, Optional, Union, Tuple, cast
 
 from bittensor.chain_data import AxonInfo
 from bittensor.utils.registration import torch, use_torch
+from bittensor.utils import weight_utils
 
 METAGRAPH_STATE_DICT_NDARRAY_KEYS = [
     "version",
@@ -649,33 +650,40 @@ def _process_weights_or_bonds(
 
             self.weights = self._process_weights_or_bonds(raw_weights_data, "weights")
         """
-        data_array = []
+        data_array: list[Union[NDArray[np.float32], "torch.Tensor"]] = []
         for item in data:
             if len(item) == 0:
                 if use_torch():
-                    data_array.append(torch.zeros(len(self.neurons)))  # type: ignore
+                    data_array.append(torch.zeros(len(self.neurons)))
                 else:
-                    data_array.append(np.zeros(len(self.neurons), dtype=np.float32))  # type: ignore
+                    data_array.append(np.zeros(len(self.neurons), dtype=np.float32))
             else:
                 uids, values = zip(*item)
                 # TODO: Validate and test the conversion of uids and values to tensor
                 if attribute == "weights":
                     data_array.append(
-                        bittensor.utils.weight_utils.convert_weight_uids_and_vals_to_tensor(
+                        weight_utils.convert_weight_uids_and_vals_to_tensor(
                             len(self.neurons),
                             list(uids),
-                            list(values),  # type: ignore
+                            list(values),
                         )
                     )
                 else:
-                    data_array.append(
-                        bittensor.utils.weight_utils.convert_bond_uids_and_vals_to_tensor(  # type: ignore
-                            len(self.neurons), list(uids), list(values)
-                        ).astype(np.float32)
+                    da_item = weight_utils.convert_bond_uids_and_vals_to_tensor(
+                        len(self.neurons), list(uids), list(values)
                     )
+                    if use_torch():
+                        data_array.append(cast("torch.LongTensor", da_item))
+                    else:
+                        data_array.append(
+                            cast(NDArray[np.float32], da_item).astype(np.float32)
+                        )
         tensor_param: Union["torch.nn.Parameter", NDArray] = (
             (
-                torch.nn.Parameter(torch.stack(data_array), requires_grad=False)
+                torch.nn.Parameter(
+                    torch.stack(cast(list["torch.Tensor"], data_array)),
+                    requires_grad=False,
+                )
                 if len(data_array)
                 else torch.nn.Parameter()
             )
@@ -731,7 +739,7 @@ def _process_root_weights(
             uids, values = zip(*item)
             # TODO: Validate and test the conversion of uids and values to tensor
             data_array.append(
-                bittensor.utils.weight_utils.convert_root_weight_uids_and_vals_to_tensor(  # type: ignore
+                weight_utils.convert_root_weight_uids_and_vals_to_tensor(  # type: ignore
                    n_subnets, list(uids), list(values), subnets
                 )
             )
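
Besides routing calls through the directly imported `weight_utils` module, the bonds branch now holds the converted tensor in `da_item` and only applies `.astype(np.float32)` on the NumPy path; `typing.cast` is used purely to narrow types for the checker and has no runtime effect. A hedged sketch of that branch in isolation (the function name and parameters below are illustrative, not the bittensor API):

```python
from typing import Any, List, cast

import numpy as np
from numpy.typing import NDArray


def append_converted_bonds(data_array: List[Any], converted: Any, torch_enabled: bool) -> None:
    """Append one converted bond row, casting to float32 only on the NumPy path.

    `torch_enabled` stands in for bittensor's use_torch(); `converted` is whatever
    the bond converter returned.
    """
    if torch_enabled:
        # A torch tensor is appended unchanged; in the real code, cast() is a
        # runtime no-op that only narrows the static type.
        data_array.append(converted)
    else:
        data_array.append(cast(NDArray[np.float32], converted).astype(np.float32))


rows: List[Any] = []
append_converted_bonds(rows, np.array([1, 2, 3], dtype=np.int64), torch_enabled=False)
print(rows[0].dtype)  # float32
```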

tests/unit_tests/test_logging.py

Lines changed: 0 additions & 35 deletions
@@ -1,5 +1,3 @@
-import os
-import re
 import pytest
 import multiprocessing
 import logging as stdlogging
@@ -170,36 +168,3 @@ def test_all_log_levels_output(logging_machine, caplog):
     assert "Test warning" in caplog.text
     assert "Test error" in caplog.text
     assert "Test critical" in caplog.text
-
-
-def test_log_sanity(logging_machine, caplog):
-    """
-    Test that logging is sane:
-    - prefix and suffix work
-    - format strings work
-    - reported filename is correct
-    Note that this is tested against caplog, which is not formatted the same as
-    stdout.
-    """
-    basemsg = "logmsg #%d, cookie: %s"
-    cookie = "0ef852c74c777f8d8cc09d511323ce76"
-    nfixtests = [
-        {},
-        {"prefix": "pref"},
-        {"suffix": "suff"},
-        {"prefix": "pref", "suffix": "suff"},
-    ]
-    cookiejar = {}
-    for i, nfix in enumerate(nfixtests):
-        prefix = nfix.get("prefix", "")
-        suffix = nfix.get("suffix", "")
-        use_cookie = f"{cookie} #{i}#"
-        logging_machine.info(basemsg, i, use_cookie, prefix=prefix, suffix=suffix)
-        # Check to see if all elements are present, regardless of downstream formatting.
-        expect = f"INFO.*{os.path.basename(__file__)}.* "
-        if prefix != "":
-            expect += prefix + " - "
-        expect += basemsg % (i, use_cookie)
-        if suffix != "":
-            expect += " - " + suffix
-        assert re.search(expect, caplog.text)

tests/unit_tests/test_metagraph.py

Lines changed: 31 additions & 0 deletions
@@ -124,6 +124,37 @@ def test_process_weights_or_bonds(mock_environment):
     # TODO: Add more checks to ensure the bonds have been processed correctly
 
 
+def test_process_weights_or_bonds_torch(
+    mock_environment, force_legacy_torch_compat_api
+):
+    _, neurons = mock_environment
+    metagraph = bittensor.metagraph(1, sync=False)
+    metagraph.neurons = neurons
+
+    # Test weights processing
+    weights = metagraph._process_weights_or_bonds(
+        data=[neuron.weights for neuron in neurons], attribute="weights"
+    )
+    assert weights.shape[0] == len(
+        neurons
+    )  # Number of rows should be equal to number of neurons
+    assert weights.shape[1] == len(
+        neurons
+    )  # Number of columns should be equal to number of neurons
+    # TODO: Add more checks to ensure the weights have been processed correctly
+
+    # Test bonds processing
+    bonds = metagraph._process_weights_or_bonds(
+        data=[neuron.bonds for neuron in neurons], attribute="bonds"
+    )
+    assert bonds.shape[0] == len(
+        neurons
+    )  # Number of rows should be equal to number of neurons
+    assert bonds.shape[1] == len(
+        neurons
+    )  # Number of columns should be equal to number of neurons
+
+
 # Mocking the bittensor.subtensor class for testing purposes
 @pytest.fixture
 def mock_subtensor():
