BUG: Scaling when reading with brainvision
When reading BrainVision Recorder files, the EEG values for my data seem to be off by a factor of 1e6. BrainVision data is usually saved in microvolts, so at first I thought that MNE might work in Volts internally and apply an implicit scaling; however, there is no documentation about this. Furthermore, the docstring of the mne.io.read_raw_brainvision function explicitly states that the resulting unit is microvolts.
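If that guess were right (MNE returning data in Volts rather than the microvolts the file stores), a minimal check would look like the sketch below; the file name is just a placeholder, but read_raw_brainvision and get_data are the same calls used in the example further down.
import mne
# Hypothetical check, assuming MNE converts to Volts on read;
# 'recording.vhdr' is a placeholder file name.
raw = mne.io.read_raw_brainvision('recording.vhdr', preload=True)
data_volts = raw.get_data()         # what MNE hands back
data_microvolts = data_volts * 1e6  # back to the unit BrainVision files use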
I tried to find the error in the code myself, but it was too convoluted for me to pin down. It could be here:
… but it could also be where raw.load_data() is implemented … ?
I have made a short example showing the problem (see example files in attached .zip folder: example_files.zip). It’s weird that for my controlled example, I find the scaling to be 1e7 instead of 1e6 - but I cannot find the reason for that either …
In any case, the point is made and help would be appreciated!
import numpy as np
import mne
# WRITING BVR DATA
# ----------------
# Note: The .vhdr and .vmrk files were edited manually to fit the testing purpose
# Settings
fname_eeg = 'test_unit_scaling.eeg'
fs = 1000 # Hz sampling freq
n_time = 1 # seconds of recording
n_timepts = int(n_time * fs) # timepoints of simulated data
chans = ['Cz', 'Pz', 'Oz'] # channel names in .vhdr file
n_chans = len(chans)
# Simulate some data between -50 and 50 microVolts
data = np.random.randint(-50, 50, (n_chans, n_timepts))
# We need to save in a multiplexed format ... so reshape as follows:
# Preallocate the data
data_multiplexed = np.zeros(data.ravel().shape)
# Make indices into preallocated array for each channel
indices = {}
for chan_i in range(n_chans):
    indices[chan_i] = np.arange(chan_i, len(data_multiplexed), n_chans)
# Put channel data in their place
for chan_i in range(n_chans):
    data_multiplexed[indices[chan_i]] = data[chan_i, :]
# Cast the data to int16
data_multiplexed = np.asarray(data_multiplexed, dtype='int16')
# Write as binary file
data_multiplexed.tofile(fname_eeg)
# READING DATA WITH MNE
# ---------------------
# One "unscaled" version where scale = 1.
raw_noscale = mne.io.read_raw_brainvision('test_unit_scaling.vhdr')
# One scaled version ... scale = 1e7
raw_scale = mne.io.read_raw_brainvision('test_unit_scaling.vhdr', scale=1e7)
# COMPARE DATA
# ------------
mne_data_noscale = raw_noscale.get_data()[:n_chans, :]
mne_data_scale = raw_scale.get_data()[:n_chans, :]
# If the following does not raise an AssertionError, ...
# MNE data is scaled after reading ... by a factor of 1e7
np.testing.assert_array_almost_equal(mne_data_noscale, data*1e-7)
np.testing.assert_array_almost_equal(mne_data_scale, data)
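One way to see where the factor comes from without digging through the reader's source is to inspect the per-channel calibration that MNE keeps in raw.info: each channel dict carries 'cal' and 'range' fields, whose product is (as far as I understand) the factor applied to the stored integers. A sketch under that assumption:
# Inspect MNE's per-channel calibration; if the implicit scaling happens here,
# the product should match the 1e-7 factor observed above.
ch = raw_noscale.info['chs'][0]
print(ch['cal'], ch['range'], ch['cal'] * ch['range'])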
Top GitHub Comments
also, the reason it is 1e7 instead of 1e6 is that the resolution is 0.1 µV (1e-1 * 1e-6 = 1e-7).
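Spelled out with the numbers from the test script above (a sketch of that explanation, assuming the edited .vhdr declares a 0.1 µV resolution):
# Each int16 count in the .eeg file is multiplied by the channel resolution
# (0.1 µV here) and then converted from microvolts to Volts.
resolution_uV = 0.1             # resolution column in the .vhdr channel info
uV_to_V = 1e-6                  # microvolts -> Volts
cal = resolution_uV * uV_to_V   # 1e-7 V per integer count
stored_count = 50               # one of the simulated values written above
print(stored_count * cal)       # 5e-06 V, i.e. the written integer divided by 1e7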
+1 PR welcome