[MAIN] add support for numpy > 2.0 #656

Open: wants to merge 31 commits into base: master

Commits (31)
23f8193
AttributeError: `np.infty` was removed in the NumPy 2.0 release. Use …
Moritz-Alexander-Kern Dec 10, 2024
49e7ed4
np.trapz to np.trapezoid
Moritz-Alexander-Kern Dec 10, 2024
3991ca6
np.NaN depr in favor of np.nan
Moritz-Alexander-Kern Dec 10, 2024
23047b9
np.Inf depr in favor of np.inf
Moritz-Alexander-Kern Dec 10, 2024
e2980f6
np.NaN depr in favor of np.nan
Moritz-Alexander-Kern Dec 10, 2024
f001e25
np.trapz depr in favor of np.trapezoid
Moritz-Alexander-Kern Dec 10, 2024
e7fc540
fix tests for SpikeTriggeredPhase
Moritz-Alexander-Kern Dec 10, 2024
6d04d2a
fix icsd
Moritz-Alexander-Kern Jan 24, 2025
bfa4540
fix test for multitaper coherence
Moritz-Alexander-Kern Jan 24, 2025
c21c279
refactor test for TSP
Moritz-Alexander-Kern Jan 24, 2025
6672bb2
split unittest
Moritz-Alexander-Kern Jan 29, 2025
60d701b
max_occs_size_dur is uint, and will cause an overflow in arange, chan…
Moritz-Alexander-Kern Feb 12, 2025
309003a
update requirements
Moritz-Alexander-Kern Feb 12, 2025
7ba256e
Merge branch 'master' into support_numpy_2
Moritz-Alexander-Kern Feb 12, 2025
38595b8
remove empty line
Moritz-Alexander-Kern Feb 12, 2025
e120952
remove debugging statement
Moritz-Alexander-Kern Feb 12, 2025
240dabc
remove debugging code
Moritz-Alexander-Kern Feb 12, 2025
62cf1b0
fix asset doctest
Moritz-Alexander-Kern Feb 12, 2025
aa99fab
fix doctest in conversion
Moritz-Alexander-Kern Feb 12, 2025
6cbe9dc
fix doctest kernels
Moritz-Alexander-Kern Feb 12, 2025
0a1df94
fix doctest signal_processing
Moritz-Alexander-Kern Feb 12, 2025
4012dbe
fix doctest spike_train_correlation
Moritz-Alexander-Kern Feb 12, 2025
1b47f79
fix doctest spike_train_synchrony
Moritz-Alexander-Kern Feb 12, 2025
c545d6b
fix doctest cv
Moritz-Alexander-Kern Feb 12, 2025
a9daa14
fix doctest fano factor
Moritz-Alexander-Kern Feb 12, 2025
42b6ee2
fix doctest lv
Moritz-Alexander-Kern Feb 12, 2025
ceafbf1
fix doctest lvr
Moritz-Alexander-Kern Feb 12, 2025
5b44b3a
fix doctest mean firing rate
Moritz-Alexander-Kern Feb 12, 2025
e1b1b68
fix unitary event analysis
Moritz-Alexander-Kern Feb 12, 2025
873f89f
fix doctest waveform features
Moritz-Alexander-Kern Feb 12, 2025
7659dd4
fix doctest waveform_features
Moritz-Alexander-Kern Feb 12, 2025
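Taken together, the commits above apply the renames from the NumPy 2.0 migration: aliases removed in NumPy 2.0 are replaced by their supported spellings. A minimal before/after sketch of those renames (illustrative values, not code from the diff):

    import numpy as np

    # Aliases removed in NumPy 2.0 and the replacements used in this PR:
    #   np.infty, np.Inf  ->  np.inf
    #   np.NaN            ->  np.nan
    #   np.trapz          ->  np.trapezoid

    x = np.linspace(0.0, np.pi, 101)
    area = np.trapezoid(np.sin(x), x)   # formerly np.trapz(np.sin(x), x)
    lowest = -np.inf                    # formerly -np.infty or -np.Inf
    missing = np.full(3, np.nan)        # formerly np.NaN
    print(area, lowest, missing)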
16 changes: 8 additions & 8 deletions elephant/asset/asset.py
@@ -106,14 +106,14 @@
The ASSET found the following sequences of synchronous events:

>>> pprint(sses)
{1: {(36, 2): {5},
(37, 4): {1},
(40, 6): {4},
(41, 7): {8},
(43, 9): {2},
(47, 14): {7},
(48, 15): {0},
(50, 17): {9}}}
{1: {(np.int64(36), np.int64(2)): {5},
(np.int64(37), np.int64(4)): {1},
(np.int64(40), np.int64(6)): {4},
(np.int64(41), np.int64(7)): {8},
(np.int64(43), np.int64(9)): {2},
(np.int64(47), np.int64(14)): {7},
(np.int64(48), np.int64(15)): {0},
(np.int64(50), np.int64(17)): {9}}}

To visualize them, refer to Viziphant documentation and an example plot
:func:`viziphant.asset.plot_synchronous_events`.
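The doctest changes above follow from NumPy 2.0's new scalar representation (NEP 51): the repr of a NumPy scalar now includes its type, so expected doctest output has to spell it out. A short illustration (not part of the diff):

    import numpy as np

    i = np.int64(36)
    print(repr(i))                 # NumPy 2.x prints: np.int64(36); NumPy 1.x printed: 36
    print(repr(np.float64(0.5)))   # np.float64(0.5)
    print(repr(np.bool_(True)))    # np.True_
    print(int(i) == 36)            # values and comparisons are unchanged: True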
2 changes: 1 addition & 1 deletion elephant/causality/granger.py
@@ -357,7 +357,7 @@ def _optimal_vector_arm(signals, dimension, max_order,

length = np.size(signals[0])

optimal_ic = np.infty
optimal_ic = np.inf
optimal_order = 1
optimal_coeffs = np.zeros((dimension, dimension, optimal_order))
optimal_cov_matrix = np.zeros((dimension, dimension))
2 changes: 1 addition & 1 deletion elephant/conversion.py
@@ -62,7 +62,7 @@
Check the correctness of a spike train realisation

>>> BinnedSpikeTrain(bst.to_spike_trains(), bin_size=bst.bin_size) == bst
True
np.True_

Rescale the units of a binned spike train without changing the data.

2 changes: 1 addition & 1 deletion elephant/current_source_density_src/icsd.py
@@ -344,7 +344,7 @@ def get_f_matrix(self):
self.coord_electrode[i])**2 + (self.diam[j] / 2)**2)-
abs(self.coord_electrode[j] + self.coord_electrode[i])))

f_matrix /= (2 * self.sigma)
f_matrix = f_matrix / (2 * self.sigma)
return f_matrix
6 changes: 3 additions & 3 deletions elephant/kernels.py
@@ -46,9 +46,9 @@
Cumulative Distribution Function

>>> kernel.cdf(0 * pq.s)
0.5
np.float64(0.5)
>>> kernel.cdf(1 * pq.s)
0.9995709396668032
np.float64(0.9995709396668032)

Inverse Cumulative Distribution Function

@@ -63,7 +63,7 @@
>>> kernel(spiketrain)
array([-0. , 0. , 0.48623347]) * 1/s
>>> kernel.cdf(0 * pq.s)
0.0
np.float64(0.0)
>>> kernel.icdf(0.5)
array(1.18677054) * s

4 changes: 2 additions & 2 deletions elephant/signal_processing.py
@@ -475,7 +475,7 @@ def butter(signal, highpass_frequency=None, lowpass_frequency=None, order=4,
...,
[ 1.12088277e-01],
[-3.11053132e-01],
[ 2.63563988e-03]]) * mV, [0.0 s, 5.0 s], sampling rate: 1000.0 Hz)>
[ 2.63563988e-03]], shape=(5000, 1)) * mV, [0.0 s, 5.0 s], sampling rate: 1000.0 Hz)>


Let's check that the normal noise power spectrum at zero frequency is close
@@ -964,7 +964,7 @@ def rauc(signal, baseline=None, bin_duration=None, t_start=None, t_stop=None):
sig_binned = sig_binned.reshape(n_bins, samples_per_bin, n_channels)

# rectify and integrate over each bin
rauc = np.trapz(np.abs(sig_binned), dx=signal.sampling_period, axis=1)
rauc = np.trapezoid(np.abs(sig_binned), dx=signal.sampling_period, axis=1)

if n_bins == 1:
# return a single value for each channel
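Two NumPy 2.0 changes are visible in this file: the repr of a summarized array now includes its shape (hence the new shape=(5000, 1) in the expected output), and np.trapz is replaced by np.trapezoid. A brief sketch, with an illustrative sampling period:

    import numpy as np

    long_signal = np.zeros(5000)
    print(repr(long_signal))     # the summarized repr now also shows shape=(5000,)

    dx = 0.001                   # sampling period in seconds, chosen for illustration
    rauc = np.trapezoid(np.abs(long_signal), dx=dx)   # formerly np.trapz
    print(rauc)                  # 0.0 for an all-zero signal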
2 changes: 1 addition & 1 deletion elephant/spade.py
@@ -1450,7 +1450,7 @@ def _get_pvalue_spec(max_occs, min_spikes, max_spikes, min_occ, n_surr, winlen,
counts, occs = np.histogram(
max_occs_size_dur,
bins=np.arange(min_occ,
np.max(max_occs_size_dur) + 2))
np.max(max_occs_size_dur).astype(np.int16) + 2))
Member Author commented:
max_occs_size_dur is uint which will cause an overflow in arange.

occs = occs[:-1].astype(np.uint16)
pvalues = np.cumsum(counts[::-1])[::-1] / n_surr
for occ_id, occ in enumerate(occs):
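The comment above describes the failure mode: with an unsigned dtype, adding 2 to a value near the type's maximum can wrap around instead of growing, which then yields a wrong upper bound for np.arange. A hedged sketch of the problem and the cast-first style of fix (the dtype and values are illustrative, not the ones used in SPADE):

    import numpy as np

    max_occs_size_dur = np.array([3, 7, 65534], dtype=np.uint16)  # unsigned counts

    wrapped = max_occs_size_dur.max() + 2    # uint16 arithmetic can wrap around to 0 (overflow warning)
    safe = int(max_occs_size_dur.max()) + 2  # cast before adding, mirroring the .astype fix above

    print(wrapped, safe)                     # e.g. 0 vs 65536
    bins = np.arange(0, safe)                # well-defined bin edges for np.histogram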
2 changes: 1 addition & 1 deletion elephant/spike_train_correlation.py
@@ -884,7 +884,7 @@ def spike_time_tiling_coefficient(spiketrain_i: neo.core.SpikeTrain,
>>> spiketrain2 = neo.SpikeTrain([1.02, 2.71, 18.82, 28.46, 28.79, 43.6],
... units='ms', t_stop=50)
>>> spike_time_tiling_coefficient(spiketrain1, spiketrain2)
0.4958601655933762
np.float64(0.4958601655933762)

"""
# input checks
2 changes: 1 addition & 1 deletion elephant/spike_train_synchrony.py
@@ -153,7 +153,7 @@ def spike_contrast(spiketrains, t_start=None, t_stop=None,
>>> spiketrain_2 = StationaryPoissonProcess(rate=20*pq.Hz,
... t_stop=1000*pq.ms).generate_spiketrain()
>>> round(spike_contrast([spiketrain_1, spiketrain_2]),3)
0.419
np.float64(0.419)

"""
if not 0. < bin_shrink_factor < 1.:
12 changes: 6 additions & 6 deletions elephant/statistics.py
@@ -214,7 +214,7 @@ def mean_firing_rate(spiketrain, t_start=None, t_stop=None, axis=None):
--------
>>> from elephant import statistics
>>> statistics.mean_firing_rate([0.3, 4.5, 6.7, 9.3])
0.4301075268817204
np.float64(0.4301075268817204)

"""
if isinstance(spiketrain, neo.SpikeTrain) and t_start is None \
@@ -325,7 +325,7 @@ def fanofactor(spiketrains, warn_tolerance=0.1 * pq.ms):
... neo.SpikeTrain([1.4, 3.3, 8.2], t_stop=10, units='s')
... ]
>>> statistics.fanofactor(spiketrains)
0.07142857142857142
np.float64(0.07142857142857142)

"""
# Build array of spike counts (one per spike train)
@@ -365,7 +365,7 @@ def __variation_check(v, with_nan):
warnings.warn("The input size is too small. Please provide"
"an input with more than 1 entry. Returning `NaN`"
"since the argument `with_nan` is `True`")
return np.NaN
return np.nan
raise ValueError("Input size is too small. Please provide "
"an input with more than 1 entry. Set 'with_nan' "
"to True to replace the error by a warning.")
@@ -427,7 +427,7 @@ def cv2(time_intervals, with_nan=False):
--------
>>> from elephant import statistics
>>> statistics.cv2([0.3, 4.5, 6.7, 9.3])
0.8226190476190478
np.float64(0.8226190476190478)

"""
# convert to array, cast to float
@@ -495,7 +495,7 @@ def lv(time_intervals, with_nan=False):
--------
>>> from elephant import statistics
>>> statistics.lv([0.3, 4.5, 6.7, 9.3])
0.8306154336734695
np.float64(0.8306154336734695)

"""
# convert to array, cast to float
@@ -569,7 +569,7 @@ def lvr(time_intervals, R=5*pq.ms, with_nan=False):
--------
>>> from elephant import statistics
>>> statistics.lvr([0.3, 4.5, 6.7, 9.3], R=0.005)
0.833907445980624
np.float64(0.833907445980624)
"""
if isinstance(R, pq.Quantity):
R = R.rescale('ms').magnitude
2 changes: 1 addition & 1 deletion elephant/test/test_gpfa.py
@@ -188,7 +188,7 @@ def test_get_seq_sqrt(self):
self.assertEqual(seqs['y'].shape, seqs_not_sqrt['y'].shape)

def test_cut_trials_inf(self):
same_data = gpfa_util.cut_trials(self.data2, seg_length=np.Inf)
same_data = gpfa_util.cut_trials(self.data2, seg_length=np.inf)
assert same_data is self.data2

def test_cut_trials_zero_length(self):
10 changes: 5 additions & 5 deletions elephant/test/test_phase_analysis.py
@@ -13,7 +13,7 @@
import quantities as pq
import scipy.io
from neo import SpikeTrain, AnalogSignal
from numpy.ma.testutils import assert_allclose
from numpy.testing import assert_allclose

import elephant.phase_analysis
from elephant.datasets import download_datasets
@@ -45,7 +45,7 @@ def test_perfect_locking_one_spiketrain_one_signal(self):
interpolate=True)

assert_allclose(phases[0], - np.pi / 2.)
assert_allclose(amps[0], 1, atol=0.1)
assert_allclose(amps[0].magnitude, 1, atol=0.1)
assert_allclose(times[0].magnitude, self.st0.magnitude)
self.assertEqual(len(phases[0]), len(self.st0))
self.assertEqual(len(amps[0]), len(self.st0))
@@ -60,7 +60,7 @@ def test_perfect_locking_many_spiketrains_many_signals(self):
interpolate=True)

assert_allclose(phases[0], -np.pi / 2.)
assert_allclose(amps[0], 1, atol=0.1)
assert_allclose(amps[0].magnitude, 1, atol=0.1)
assert_allclose(times[0].magnitude, self.st0.magnitude)
self.assertEqual(len(phases[0]), len(self.st0))
self.assertEqual(len(amps[0]), len(self.st0))
@@ -75,7 +75,7 @@ def test_perfect_locking_one_spiketrains_many_signals(self):
interpolate=True)

assert_allclose(phases[0], -np.pi / 2.)
assert_allclose(amps[0], 1, atol=0.1)
assert_allclose(amps[0].magnitude, 1, atol=0.1)
assert_allclose(times[0].magnitude, self.st0.magnitude)
self.assertEqual(len(phases[0]), len(self.st0))
self.assertEqual(len(amps[0]), len(self.st0))
@@ -88,7 +88,7 @@ def test_perfect_locking_many_spiketrains_one_signal(self):
interpolate=True)

assert_allclose(phases[0], -np.pi / 2.)
assert_allclose(amps[0], 1, atol=0.1)
assert_allclose(amps[0].magnitude, 1, atol=0.1)
assert_allclose(times[0].magnitude, self.st0.magnitude)
self.assertEqual(len(phases[0]), len(self.st0))
self.assertEqual(len(amps[0]), len(self.st0))
3 changes: 2 additions & 1 deletion elephant/test/test_spade.py
@@ -404,7 +404,7 @@ def test_spade_msip_3d(self):
assert_equal(lags_msip, self.lags_msip)

# test under different configuration of parameters than the default one
def test_parameters_3d(self):
def test_parameters_3d_min_spikes(self):
# test min_spikes parameter
output_msip_min_spikes = spade.spade(
self.msip,
@@ -438,6 +438,7 @@ def test_parameters_3d(self):
el for el in self.elements_msip if len(el) >= self.min_neu and len(
el) >= self.min_spikes])

def test_parameters_3d_min_occ(self):
# test min_occ parameter
Member Author commented:
Split the test into two tests to be more precise when debugging.

output_msip_min_occ = spade.spade(
self.msip,
10 changes: 5 additions & 5 deletions elephant/test/test_spectral.py
@@ -16,7 +16,7 @@
import scipy.fft
import scipy.signal as spsig
from neo import AnalogSignal
from numpy.testing import assert_array_equal
from numpy.testing import assert_array_equal, assert_array_almost_equal
from packaging import version

import elephant.spectral
@@ -569,8 +569,8 @@ def test_multitaper_cross_spectrum_behavior(self):
elephant.spectral.multitaper_cross_spectrum(
data.magnitude.T, fs=1 / sampling_period,
peak_resolution=peak_res)
self.assertTrue((freqs == freqs_np).all()
and (cross_spec == cross_spec_np).all())
assert_array_equal(freqs.magnitude, freqs_np)
assert_array_almost_equal(cross_spec.magnitude, cross_spec_np)

# one-sided vs two-sided spectrum
freqs_os, cross_spec_os = \
@@ -929,8 +929,8 @@ def test_multitaper_coherence_input_types(self):
anasig_signal_j)

np.testing.assert_array_equal(arr_f, anasig_f)
np.testing.assert_allclose(arr_coh, anasig_coh, atol=1e-6)
np.testing.assert_array_equal(arr_phi, anasig_phi)
np.testing.assert_allclose(arr_coh, anasig_coh.magnitude, atol=1e-6)
np.testing.assert_array_almost_equal(arr_phi, anasig_phi.magnitude)
Member Author commented:
Investigate the origin of the difference.


def test_multitaper_cohere_peak(self):
# Generate dummy data
2 changes: 1 addition & 1 deletion elephant/test/test_spike_train_correlation.py
@@ -264,7 +264,7 @@ def test_empty_spike_train(self):
result = sc.correlation_coefficient(binned_12, fast=False)

# test for NaNs in the output array
target = np.zeros((2, 2)) * np.NaN
target = np.zeros((2, 2)) * np.nan
target[0, 0] = 1.0
assert_array_almost_equal(result, target)

27 changes: 14 additions & 13 deletions elephant/test/test_total_spiking_probability_edges.py
@@ -87,25 +87,26 @@ def test_total_spiking_probability_edges(self):
"ER10/new_sim0_100.mat",
"ER15/new_sim0_100.mat",
]
repo_base_path = 'unittest/functional_connectivity/' \
'total_spiking_probability_edges/data/'

for datafile in files:
repo_base_path = 'unittest/functional_connectivity/' \
'total_spiking_probability_edges/data/'
downloaded_dataset_path = download_datasets(repo_base_path +
datafile)
with self.subTest(datafile=datafile):
downloaded_dataset_path = download_datasets(repo_base_path +
datafile)

spiketrains, original_data = load_spike_train_simulated(
downloaded_dataset_path)
spiketrains, original_data = load_spike_train_simulated(
downloaded_dataset_path)

connectivity_matrix, delay_matrix = \
total_spiking_probability_edges(spiketrains)
connectivity_matrix, delay_matrix = \
total_spiking_probability_edges(spiketrains)

# Remove self-connections
np.fill_diagonal(connectivity_matrix, 0)
# Remove self-connections
np.fill_diagonal(connectivity_matrix, 0)

_, _, _, auc = roc_curve(connectivity_matrix, original_data)
_, _, _, auc = roc_curve(connectivity_matrix, original_data)

self.assertGreater(auc, 0.95)
self.assertGreater(auc, 0.95)

Member Author commented:
Use subtests to ensure garbage collection is carried out in between; otherwise excessive amounts of memory are used.

# ====== HELPER FUNCTIONS ======

@@ -173,7 +174,7 @@ def roc_curve(estimate, original):
tpr_list.append(sensitivity(*conf_matrix))
fpr_list.append(fall_out(*conf_matrix))

auc = np.trapz(tpr_list, fpr_list)
auc = np.trapezoid(tpr_list, fpr_list)

return tpr_list, fpr_list, thresholds, auc

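The review comment above motivates the restructuring: each dataset file now runs inside self.subTest, so results are reported per file and the large intermediate arrays can be released between iterations. A minimal, self-contained sketch of the pattern (the file names are taken from the test above; the loop body is a placeholder):

    import unittest

    class TspeSubTestSketch(unittest.TestCase):
        def test_per_datafile(self):
            for datafile in ["ER10/new_sim0_100.mat", "ER15/new_sim0_100.mat"]:
                with self.subTest(datafile=datafile):
                    # In the real test the data are downloaded and analysed here;
                    # keeping the intermediates local to the loop body lets them be
                    # freed between iterations, and a failure is reported per file.
                    auc = 0.97  # placeholder for the value computed by roc_curve
                    self.assertGreater(auc, 0.95)

    if __name__ == "__main__":
        unittest.main()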
2 changes: 1 addition & 1 deletion elephant/unitary_event_analysis.py
@@ -110,7 +110,7 @@ def hash_from_pattern(m, base=2):

>>> import numpy as np
>>> hash_from_pattern([0, 1, 1])
3
np.int64(3)

>>> import numpy as np
>>> m = np.array([[0, 1, 0, 0, 1, 1, 0, 1],
4 changes: 2 additions & 2 deletions elephant/waveform_features.py
@@ -58,7 +58,7 @@ def waveform_width(waveform, cutoff=0.75):
--------
>>> from elephant.waveform_features import waveform_width
>>> waveform_width([20, 25, 10, -5, -2, 7, 15], cutoff=0.75)
3
np.int64(3)

"""
waveform = np.squeeze(waveform)
@@ -116,7 +116,7 @@ def waveform_snr(waveforms):
>>> from elephant.waveform_features import waveform_snr
>>> waveforms = [[20, 25, 10, -5, -2, 7, 15], [17, 29, 11, -4, 0, 5, 20]]
>>> waveform_snr(waveforms)
12.249999999999998
np.float64(12.249999999999998)

"""
if isinstance(waveforms, neo.SpikeTrain):
2 changes: 1 addition & 1 deletion requirements/requirements.txt
@@ -1,5 +1,5 @@
neo>=0.10.0
numpy>=1.19.5, <2
numpy>=1.19.5
Member Author commented:
numpy>2.0 ?

quantities>=0.14.1
scipy>=1.10.0
six>=1.10.0