Fix copy-view issue in epochs #12121

Merged (16 commits, Nov 9, 2023)
2 changes: 2 additions & 0 deletions doc/changes/devel.rst
@@ -56,6 +56,7 @@ Bugs
- Fix bug where ``encoding`` argument was ignored when reading annotations from an EDF file (:gh:`11958` by :newcontrib:`Andrew Gilbert`)
- Mark tests ``test_adjacency_matches_ft`` and ``test_fetch_uncompressed_file`` as network tests (:gh:`12041` by :newcontrib:`Maksym Balatsko`)
- Fix bug with :func:`mne.channels.read_ch_adjacency` (:gh:`11608` by :newcontrib:`Ivan Zubarev`)
- Fix bug where ``epochs.get_data(..., scalings=...)`` would errantly modify the preloaded data (:gh:`12121` by :newcontrib:`Pablo Mainar` and `Eric Larson`_)
- Fix bugs with saving splits for :class:`~mne.Epochs` (:gh:`11876` by `Dmitrii Altukhov`_)
- Fix bug with multi-plot 3D rendering where only one plot was updated (:gh:`11896` by `Eric Larson`_)
- Fix bug where ``verbose`` level was not respected inside parallel jobs (:gh:`12154` by `Eric Larson`_)
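
The ``scalings`` entry above is one symptom of :meth:`~mne.Epochs.get_data` operating on the preloaded buffer rather than on a copy; the new ``copy`` argument makes the view-versus-copy distinction explicit. A minimal sketch with made-up synthetic data (nothing below comes from the changed files):

import numpy as np
import mne

# Made-up example: 3 epochs, 2 EEG channels, 50 samples at 100 Hz
info = mne.create_info(["EEG 001", "EEG 002"], sfreq=100.0, ch_types="eeg")
rng = np.random.default_rng(0)
epochs = mne.EpochsArray(rng.standard_normal((3, 2, 50)) * 1e-6, info)

snapshot = epochs.get_data(copy=True)  # independent snapshot of the stored data
view = epochs.get_data(copy=False)     # a view: in-place edits write back to the Epochs
safe = epochs.get_data(copy=True)      # safe to rescale without side effects
safe *= 1e6                            # e.g. convert to microvolts for plotting
np.testing.assert_allclose(epochs.get_data(copy=True), snapshot)  # stored data untouched
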
@@ -92,6 +93,7 @@ Bugs

API changes
~~~~~~~~~~~
- The default for :meth:`mne.Epochs.get_data` of ``copy=False`` will change to ``copy=True`` in 1.7. Set it explicitly to avoid a warning (:gh:`12121` by :newcontrib:`Pablo Mainar` and `Eric Larson`_)
- ``mne.preprocessing.apply_maxfilter`` and ``mne maxfilter`` have been deprecated and will be removed in 1.7. Use :func:`mne.preprocessing.maxwell_filter` (see :ref:`this tutorial <tut-artifact-sss>`) in Python or the command-line utility from MEGIN ``maxfilter`` and :func:`mne.bem.fit_sphere_to_headshape` instead (:gh:`11938` by `Eric Larson`_)
- :func:`mne.io.kit.read_mrk` reading pickled files is deprecated; use something like ``np.savetxt(fid, pts, delimiter="\t", newline="\n")`` to save your points instead (:gh:`11937` by `Eric Larson`_)
- Replace legacy ``inst.pick_channels`` and ``inst.pick_types`` with ``inst.pick`` (where ``inst`` is an instance of :class:`~mne.io.Raw`, :class:`~mne.Epochs`, or :class:`~mne.Evoked`) wherever possible (:gh:`11907` by `Clemens Brunner`_)
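
To adapt to the ``copy`` default change above, call sites can pass ``copy`` explicitly, which also silences the 1.6 deprecation warning; this is what the example and test updates below do. A short sketch, reusing the synthetic ``epochs`` from the snippet further up (names are illustrative):

X = epochs.get_data(copy=False)      # keep the 1.6 no-copy behaviour explicitly; do not modify X in place
X_safe = epochs.get_data(copy=True)  # the 1.7 default: an independent array, safe to modify

# The updates below also push selection into get_data itself, e.g.
# epochs.get_data(picks=["EEG 001"]) or epochs.get_data(tmin=0.1, tmax=0.2, copy=False),
# instead of slicing the full array or cropping a copied Epochs first.
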
2 changes: 2 additions & 0 deletions doc/changes/names.inc
@@ -406,6 +406,8 @@

.. _Pablo-Arias: https://github.com/Pablo-Arias

.. _Pablo Mainar: https://github.com/pablomainar

.. _Padma Sundaram: https://www.nmr.mgh.harvard.edu/user/8071

.. _Paul Pasler: https://github.com/ppasler
1 change: 1 addition & 0 deletions examples/datasets/kernel_phantom.py
@@ -49,6 +49,7 @@

# %%
# The data covariance has an interesting structure because of densely packed sensors:

cov = mne.compute_covariance(epochs, tmax=-0.01)
mne.viz.plot_cov(cov, raw.info)

1 change: 0 additions & 1 deletion examples/datasets/limo_data.py
@@ -37,7 +37,6 @@
# License: BSD-3-Clause

# %%

import matplotlib.pyplot as plt
import numpy as np

4 changes: 2 additions & 2 deletions examples/decoding/decoding_csp_eeg.py
@@ -78,8 +78,8 @@

# Define a monte-carlo cross-validation generator (reduce variance):
scores = []
epochs_data = epochs.get_data()
epochs_data_train = epochs_train.get_data()
epochs_data = epochs.get_data(copy=False)
epochs_data_train = epochs_train.get_data(copy=False)
cv = ShuffleSplit(10, test_size=0.2, random_state=42)
cv_split = cv.split(epochs_data_train)

4 changes: 2 additions & 2 deletions examples/decoding/decoding_csp_timefreq.py
@@ -105,7 +105,7 @@
epochs.drop_bad()
y = le.fit_transform(epochs.events[:, 2])

X = epochs.get_data()
X = epochs.get_data(copy=False)

# Save mean scores over folds for each frequency and time window
freq_scores[freq] = np.mean(
@@ -165,7 +165,7 @@
w_tmax = w_time + w_size / 2.0

# Crop data into time-window of interest
X = epochs.copy().crop(w_tmin, w_tmax).get_data()
X = epochs.get_data(tmin=w_tmin, tmax=w_tmax, copy=False)

# Save mean scores over folds for each frequency and time window
tf_scores[freq, t] = np.mean(
@@ -77,12 +77,12 @@

# Fit classifiers on the epochs where the stimulus was presented to the left.
# Note that the experimental condition y indicates auditory or visual
time_gen.fit(X=epochs["Left"].get_data(), y=epochs["Left"].events[:, 2] > 2)
time_gen.fit(X=epochs["Left"].get_data(copy=False), y=epochs["Left"].events[:, 2] > 2)

# %%
# Score on the epochs where the stimulus was presented to the right.
scores = time_gen.score(
X=epochs["Right"].get_data(), y=epochs["Right"].events[:, 2] > 2
X=epochs["Right"].get_data(copy=False), y=epochs["Right"].events[:, 2] > 2
)

# %%
@@ -58,7 +58,7 @@
verbose=False,
)

X = epochs.get_data()
X = epochs.get_data(copy=False)

##############################################################################
# Transform data with PCA computed on the average ie evoked response
2 changes: 1 addition & 1 deletion examples/decoding/ems_filtering.py
@@ -64,7 +64,7 @@
epochs.pick("grad")

# Setup the data to use it a scikit-learn way:
X = epochs.get_data() # The MEG data
X = epochs.get_data(copy=False) # The MEG data
y = epochs.events[:, 2] # The conditions indices
n_epochs, n_channels, n_times = X.shape

2 changes: 1 addition & 1 deletion examples/decoding/linear_model_patterns.py
@@ -60,7 +60,7 @@

# get MEG data
meg_epochs = epochs.copy().pick(picks="meg", exclude="bads")
meg_data = meg_epochs.get_data().reshape(len(labels), -1)
meg_data = meg_epochs.get_data(copy=False).reshape(len(labels), -1)

# %%
# Decoding in sensor space using a LogisticRegression classifier
2 changes: 1 addition & 1 deletion examples/decoding/ssd_spatial_filters.py
@@ -146,7 +146,7 @@
h_trans_bandwidth=1,
),
)
ssd_epochs.fit(X=epochs.get_data())
ssd_epochs.fit(X=epochs.get_data(copy=False))

# Plot topographies.
pattern_epochs = mne.EvokedArray(data=ssd_epochs.patterns_[:4].T, info=ssd_epochs.info)
3 changes: 1 addition & 2 deletions examples/preprocessing/otp.py
@@ -13,7 +13,6 @@
# License: BSD-3-Clause

# %%

import numpy as np

import mne
@@ -70,7 +69,7 @@ def compute_bias(raw):
sphere = mne.make_sphere_model(r0=(0.0, 0.0, 0.0), head_radius=None, verbose=False)
cov = mne.compute_covariance(epochs, tmax=0, method="oas", rank=None, verbose=False)
idx = epochs.time_as_index(0.036)[0]
data = epochs.get_data()[:, :, idx].T
data = epochs.get_data(copy=False)[:, :, idx].T
evoked = mne.EvokedArray(data, epochs.info, tmin=0.0)
dip = fit_dipole(evoked, cov, sphere, n_jobs=None, verbose=False)[0]
actual_pos = mne.dipole.get_phantom_dipoles()[0][dipole_number - 1]
15 changes: 8 additions & 7 deletions examples/stats/sensor_regression.py
@@ -18,10 +18,6 @@
of the words for which we have EEG activity.

For the general methodology, see e.g. :footcite:`HaukEtAl2006`.

References
----------
.. footbibliography::
"""
# Authors: Tal Linzen <[email protected]>
# Denis A. Engemann <[email protected]>
@@ -43,7 +39,7 @@
epochs = mne.read_epochs(path)
print(epochs.metadata.head())

##############################################################################
# %%
# Psycholinguistically relevant word characteristics are continuous. I.e.,
# concreteness or imaginability is a graded property. In the metadata,
# we have concreteness ratings on a 5-point scale. We can show the dependence
@@ -59,7 +55,7 @@
evokeds, colors=colors, split_legend=True, cmap=(name + " Percentile", "viridis")
)

##############################################################################
# %%
# We observe that there appears to be a monotonic dependence of EEG on
# concreteness. We can also conduct a continuous analysis: single-trial level
# regression with concreteness as a continuous (although here, binned)
@@ -72,7 +68,7 @@
title=cond, ts_args=dict(time_unit="s"), topomap_args=dict(time_unit="s")
)

##############################################################################
# %%
# Because the :func:`~mne.stats.linear_regression` function also estimates
# p values, we can --
# after applying FDR correction for multiple comparisons -- also visualise the
@@ -85,3 +81,8 @@
reject_H0, fdr_pvals = fdr_correction(res["Concreteness"].p_val.data)
evoked = res["Concreteness"].beta
evoked.plot_image(mask=reject_H0, time_unit="s")

# %%
# References
# ----------
# .. footbibliography::
14 changes: 5 additions & 9 deletions mne/_fiff/tests/test_reference.py
@@ -620,12 +620,10 @@ def test_add_reference():
assert_equal(epochs_ref._data.shape[1], epochs._data.shape[1] + 1)
_check_channel_names(epochs_ref, "Ref")
ref_idx = epochs_ref.ch_names.index("Ref")
ref_data = epochs_ref.get_data()[:, ref_idx, :]
ref_data = epochs_ref.get_data(picks=[ref_idx])[:, 0]
assert_array_equal(ref_data, 0)
picks_eeg = pick_types(epochs.info, meg=False, eeg=True)
assert_array_equal(
epochs.get_data()[:, picks_eeg, :], epochs_ref.get_data()[:, picks_eeg, :]
)
assert_array_equal(epochs.get_data(picks_eeg), epochs_ref.get_data(picks_eeg))

# add two reference channels to epochs
raw = read_raw_fif(fif_fname, preload=True)
@@ -650,12 +648,10 @@
ref_idy = epochs_ref.ch_names.index("M2")
assert_equal(epochs_ref.info["chs"][ref_idx]["ch_name"], "M1")
assert_equal(epochs_ref.info["chs"][ref_idy]["ch_name"], "M2")
ref_data = epochs_ref.get_data()[:, [ref_idx, ref_idy], :]
ref_data = epochs_ref.get_data([ref_idx, ref_idy])
assert_array_equal(ref_data, 0)
picks_eeg = pick_types(epochs.info, meg=False, eeg=True)
assert_array_equal(
epochs.get_data()[:, picks_eeg, :], epochs_ref.get_data()[:, picks_eeg, :]
)
assert_array_equal(epochs.get_data(picks_eeg), epochs_ref.get_data(picks_eeg))

# add reference channel to evoked
raw = read_raw_fif(fif_fname, preload=True)
@@ -725,7 +721,7 @@ def test_add_reference():
data = data.get_data()
epochs = make_fixed_length_epochs(raw).load_data()
data_2 = epochs.copy().add_reference_channels(["REF"]).pick(picks="eeg")
data_2 = data_2.get_data()[0]
data_2 = data_2.get_data(copy=False)[0]
assert_allclose(data, data_2)
evoked = epochs.average()
data_3 = evoked.copy().add_reference_channels(["REF"]).pick(picks="eeg")
2 changes: 1 addition & 1 deletion mne/beamformer/_dics.py
@@ -493,7 +493,7 @@ def apply_dics_epochs(epochs, filters, return_generator=False, verbose=None):
tmin = epochs.times[0]

sel = _check_channels_spatial_filter(epochs.ch_names, filters)
data = epochs.get_data()[:, sel, :]
data = epochs.get_data(sel)

stcs = _apply_dics(data=data, filters=filters, info=info, tmin=tmin)

2 changes: 1 addition & 1 deletion mne/beamformer/_lcmv.py
@@ -402,7 +402,7 @@ def apply_lcmv_epochs(epochs, filters, *, return_generator=False, verbose=None):
tmin = epochs.times[0]

sel = _check_channels_spatial_filter(epochs.ch_names, filters)
data = epochs.get_data()[:, sel, :]
data = epochs.get_data(sel)
stcs = _apply_lcmv(data=data, filters=filters, info=info, tmin=tmin)

if not return_generator:
5 changes: 4 additions & 1 deletion mne/channels/channels.py
@@ -1898,7 +1898,10 @@ def combine_channels(
ch_idx = list(range(inst.info["nchan"]))
ch_names = inst.info["ch_names"]
ch_types = inst.get_channel_types()
inst_data = inst.data if isinstance(inst, Evoked) else inst.get_data()
kwargs = dict()
if isinstance(inst, BaseEpochs):
kwargs["copy"] = False
inst_data = inst.get_data(**kwargs)
groups = OrderedDict(deepcopy(groups))

# Convert string values of ``method`` into callables
2 changes: 1 addition & 1 deletion mne/channels/tests/test_channels.py
@@ -615,7 +615,7 @@ def test_equalize_channels():
assert raw2.ch_names == ["CH1", "CH2"]
assert_array_equal(raw2.get_data(), [[1.0], [2.0]])
assert epochs2.ch_names == ["CH1", "CH2"]
assert_array_equal(epochs2.get_data(), [[[3.0], [2.0]]])
assert_array_equal(epochs2.get_data(copy=False), [[[3.0], [2.0]]])
assert cov2.ch_names == ["CH1", "CH2"]
assert cov2["bads"] == cov["bads"]
assert ave2.ch_names == ave.ch_names
4 changes: 2 additions & 2 deletions mne/channels/tests/test_interpolation.py
@@ -232,10 +232,10 @@ def test_interpolation_meg():
assert len(raw_meg.info["bads"]) == len(raw_meg.info["bads"])

# MEG -- epochs
data1 = epochs_meg.get_data()[:, pick, :].ravel()
data1 = epochs_meg.get_data(pick).ravel()
epochs_meg.info.normalize_proj()
epochs_meg.interpolate_bads(mode="fast")
data2 = epochs_meg.get_data()[:, pick, :].ravel()
data2 = epochs_meg.get_data(pick).ravel()
assert np.corrcoef(data1, data2)[0, 1] > thresh
assert len(epochs_meg.info["bads"]) == 0

2 changes: 1 addition & 1 deletion mne/decoding/tests/test_base.py
@@ -317,7 +317,7 @@ def test_get_coef_multiclass_full(n_classes, n_channels, n_times):
)
scorer = "roc_auc_ovr_weighted"
time_gen = GeneralizingEstimator(clf, scorer, verbose=True)
X = epochs.get_data()
X = epochs.get_data(copy=False)
y = epochs.events[:, 2]
n_splits = 3
cv = StratifiedKFold(n_splits=n_splits)
6 changes: 3 additions & 3 deletions mne/decoding/tests/test_csp.py
@@ -123,7 +123,7 @@ def test_csp():
preload=True,
proj=False,
)
epochs_data = epochs.get_data()
epochs_data = epochs.get_data(copy=False)
n_channels = epochs_data.shape[1]
y = epochs.events[:, -1]

@@ -182,7 +182,7 @@ def test_csp():
proj=False,
preload=True,
)
epochs_data = epochs.get_data()
epochs_data = epochs.get_data(copy=False)
n_channels = epochs_data.shape[1]

n_channels = epochs_data.shape[1]
@@ -256,7 +256,7 @@ def test_regularized_csp():
epochs = Epochs(
raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), preload=True
)
epochs_data = epochs.get_data()
epochs_data = epochs.get_data(copy=False)
n_channels = epochs_data.shape[1]

n_components = 3
2 changes: 1 addition & 1 deletion mne/decoding/tests/test_ems.py
@@ -76,7 +76,7 @@ def test_ems():
raw.close()

# EMS transformer, check that identical to compute_ems
X = epochs.get_data()
X = epochs.get_data(copy=False)
y = epochs.events[:, 2]
X = X / np.std(X) # X scaled outside cv in compute_ems
Xt, coefs = list(), list()
10 changes: 5 additions & 5 deletions mne/decoding/tests/test_transformer.py
@@ -55,7 +55,7 @@ def test_scaler(info, method):
epochs = Epochs(
raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), preload=True
)
epochs_data = epochs.get_data()
epochs_data = epochs.get_data(copy=False)
y = epochs.events[:, -1]

epochs_data_t = epochs_data.transpose([1, 0, 2])
@@ -115,7 +115,7 @@ def test_scaler(info, method):
picks=np.arange(len(raw.ch_names)),
) # non-data chs
scaler = Scaler(epochs_bad.info, None)
pytest.raises(ValueError, scaler.fit, epochs_bad.get_data(), y)
pytest.raises(ValueError, scaler.fit, epochs_bad.get_data(copy=False), y)


def test_filterestimator():
@@ -129,7 +129,7 @@ def test_filterestimator():
epochs = Epochs(
raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), preload=True
)
epochs_data = epochs.get_data()
epochs_data = epochs.get_data(copy=False)

# Add tests for different combinations of l_freq and h_freq
filt = FilterEstimator(epochs.info, l_freq=40, h_freq=80)
@@ -180,7 +180,7 @@ def test_psdestimator():
epochs = Epochs(
raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), preload=True
)
epochs_data = epochs.get_data()
epochs_data = epochs.get_data(copy=False)
psd = PSDEstimator(2 * np.pi, 0, np.inf)
y = epochs.events[:, -1]
X = psd.fit_transform(epochs_data, y)
@@ -244,7 +244,7 @@ def test_unsupervised_spatial_filter():
pytest.raises(ValueError, UnsupervisedSpatialFilter, KernelRidge(2))

# Test fit
X = epochs.get_data()
X = epochs.get_data(copy=False)
n_components = 4
usf = UnsupervisedSpatialFilter(PCA(n_components))
usf.fit(X)