Unwanted data rejection when running mne.Epochs
Dear MNE developers,
The bug
While running mne.Epochs, the function keeps dropping epochs. As far as I can tell I am not asking it to reject any data, yet I cannot get it to stop dropping them. Right now it is rejecting almost all of my data, leaving me with only 2 trials.
Code snippet
I’m running mne.Epochs with the following input parameters:

epochs = mne.Epochs(raw, events, config.event_id, config.tmin, config.tmax,
                    proj=True, picks=picks, baseline=config.baseline,
                    preload=True, decim=config.decim, reject=None)
where:

config.event_id = {'2s': 2, '4s': 4, '8s': 8, '16s': 16, 'Inf': 32}
config.tmin = -1
config.tmax = 2
config.baseline = (None, 0)
config.decim = 1
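Since reject=None rules out amplitude- and flatness-based rejection, a first diagnostic step is to ask the Epochs object why each epoch was dropped. Here is a minimal sketch using the drop_log attribute and plot_drop_log() method, with the same variable names as in the snippet above:

# drop_log has one entry per event: an empty entry means the epoch was
# kept; otherwise it lists the reason(s), e.g. an annotation description,
# 'TOO_SHORT', or 'NO_DATA'.
for idx, reasons in enumerate(epochs.drop_log):
    if reasons:
        print('Epoch %d dropped because of: %s' % (idx, reasons))

# Summary plot of how often each drop reason occurred:
epochs.plot_drop_log()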
Data
You can download the variables raw and events used in this code snippet from here: https://we.tl/t-YT29hjZKzf
Expected results
I expect the function to cut the data into epochs without rejecting any of them.
Actual results
The function runs fine but gives the following (surprising) output:
31 matching events found
Applying baseline correction (mode: mean)
Not setting metadata
Created an SSP operator (subspace dimension = 1)
1 projection items activated
Loading data for 31 events and 3001 original time points …
29 bad epochs dropped
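For what it's worth, with reject=None the remaining usual suspects are epochs that extend past the start or end of the recording, and spans covered by 'bad'-prefixed annotations, which mne.Epochs rejects by default. A minimal sketch of the annotation case, assuming annotations turn out to be the culprit here:

# Check whether the raw data carries 'bad'-prefixed annotations; mne.Epochs
# drops epochs overlapping them by default (reject_by_annotation=True).
print(raw.annotations)

# If keeping those epochs is acceptable, annotation-based rejection can be
# disabled explicitly:
epochs = mne.Epochs(raw, events, config.event_id, config.tmin, config.tmax,
                    proj=True, picks=picks, baseline=config.baseline,
                    preload=True, decim=config.decim, reject=None,
                    reject_by_annotation=False)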
Additional information
Output of mne.sys_info():

Platform:   Windows-10-10.0.14393-SP0
Python:     3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)]
Executable: C:\ProgramData\Anaconda2\envs\mne\pythonw.exe
CPU:        Intel64 Family 6 Model 158 Stepping 9, GenuineIntel: 8 cores
Memory:     15.9 GB

mne:        0.16.1
numpy:      1.14.3 {blas=mkl_rt, lapack=mkl_rt}
scipy:      1.1.0
matplotlib: 2.2.2 {backend=Qt5Agg}
sklearn:    0.19.1
nibabel:    2.2.1
mayavi:     4.6.0 {qt_api=pyqt5}
pycuda:     Not found
skcuda:     Not found
pandas:     0.23.0
Thanks!
Top GitHub Comments
@drammock feel free to add this as a point of emphasis in the doc work
Right, thank you @agramfort! That makes sense.