SNIRF loader not working without the optional sourcePos3D
Describe the bug
The function read_raw_snirf fails if the optional field sourcePos3D is not present in the SNIRF file.
Steps to reproduce
Given an nback.snirf SNIRF file with only sourcePos2D and no sourcePos3D:
from mne.io import read_raw_snirf
snirf_intensity = read_raw_snirf('nback.snirf')
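SNIRF files are HDF5 containers, so the missing field can be confirmed directly before calling the loader. A minimal sketch using h5py (the /nirs/probe/... paths follow the SNIRF specification; the filename is the one from this report):

import h5py

# Inspect the probe group of the SNIRF (HDF5) file directly.
with h5py.File('nback.snirf', 'r') as f:
    probe = f['/nirs/probe']
    print('sourcePos2D present:', 'sourcePos2D' in probe)  # True for this file
    print('sourcePos3D present:', 'sourcePos3D' in probe)  # False, triggering the bug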
Expected results
Loading nback.snirf
Actual results
Loading nback.snirf
Traceback (most recent call last):
File "...\snirf.py", line 3, in <module>
snirf_intensity = read_raw_snirf('nback.snirf')
File "...\mne-python\mne\io\snirf\_snirf.py", line 43, in read_raw_snirf
return RawSNIRF(fname, preload, verbose)
File "<decorator-gen-248>", line 24, in __init__
File "...\mne-python\mne\io\snirf\_snirf.py", line 119, in __init__
assert len(sources) == srcPos3D.shape[0]
IndexError: tuple index out of range
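The IndexError is consistent with the loader reading the absent dataset as None and wrapping it in a zero-dimensional NumPy array, whose empty shape tuple has no element 0. This is an assumption about the internals, illustrated below rather than quoted from the actual code:

import numpy as np

# h5py's Group.get() returns None for a missing dataset; np.array(None)
# is a 0-dimensional array, so shape[0] raises IndexError.
missing = np.array(None)
print(missing.shape)  # () -- an empty tuple
missing.shape[0]      # IndexError: tuple index out of range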
Additional information
This may be complex to solve, as the beer_lambert_law function requires the 3D positions of the optodes.
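The dependency exists because the modified Beer-Lambert law scales absorption by the source-detector separation, which is the Euclidean distance between the 3D optode positions. A sketch of that distance computation (illustrative coordinates, not MNE's internal code):

import numpy as np

# Hypothetical source and detector positions in metres.
source_pos = np.array([0.02, 0.08, 0.05])
detector_pos = np.array([0.05, 0.08, 0.05])

# Source-detector separation used as the path-length basis in the
# modified Beer-Lambert law (here 0.03 m, a typical fNIRS channel distance).
separation = np.linalg.norm(source_pos - detector_pos)
print(separation)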
Top GitHub Comments
@Horschig @HanBnrd @larsoner
To ensure we support Artinis-created SNIRF files, we would need some test data in the testing repository. We would like a short (10-15 s) recording exactly as created by your software, not edited afterwards (e.g. cropped, as this might not be faithful to how your software creates the file).
We also try to keep a record of the settings used to take the measurement so that we can include additional tests in our code, and it helps us understand when someone else brings a file that doesn't behave well.
If the data has been converted after collection, as I think happened here, please include a copy of the script used to convert the data so that we can reproduce the steps if required.
Some examples of PRs for adding files…
https://github.com/mne-tools/mne-testing-data/pull/51
https://github.com/mne-tools/mne-testing-data/pull/70
https://github.com/mne-tools/mne-testing-data/pull/57
It would be great to get an Artinis file in the test battery so we could claim compatibility. I look forward to a PR.
In MNE in general we assume that 3D locations are known, so I expect only having 2D to create all sorts of issues down the line. For example, when people follow our fNIRS tutorials, they'll do a plot_alignment with their own data to sanity-check things in 3D, and they'll see a flat disc of electrodes over the head (or, worse and more likely, through it) and be rightly confused. The 2D plotting functions expect 3D locations when they do their own flattening. Any future physical modeling will also be wrong. There are probably other cases, too.
All of these problems can be solved if we can transform the electrode locations from 2D to 3D. This is what we do with template EEG positions that only give azimuth and elevation. I'd prefer to figure out a suitable way to do this in the least painful way for fNIRS electrodes if possible, rather than change our code to accommodate 2D positions.
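For reference, lifting azimuth/elevation pairs to 3D is plain spherical-to-Cartesian geometry; a minimal sketch of the idea (a hypothetical helper, not MNE's actual montage code; the 0.09 m radius is an assumed head size):

import numpy as np

def az_el_to_3d(azimuth, elevation, radius=0.09):
    # Map azimuth/elevation angles (radians) to points on a head-sized sphere.
    x = radius * np.cos(elevation) * np.cos(azimuth)
    y = radius * np.cos(elevation) * np.sin(azimuth)
    z = radius * np.sin(elevation)
    return np.column_stack([x, y, z])

# Two example positions at 45 degrees elevation.
print(az_el_to_3d(np.array([0.0, np.pi / 2]), np.array([np.pi / 4] * 2)))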
@kdarti what is the transformation that takes the 3D locations and transforms them to 2D? Ideally I think we would just invert this to get 3D locations back.
If this transformation is unknown and all that matters is the distance, there is probably some 2D-plane-to-3D-sphere (or sphere-ish) transform that preserves distances as much as possible, and we should just use that. If we’re willing to accept that the 2D approximation for 3D positions is acceptably “close enough” for MBLL, then presumably we can get the 2D back to 3D transformation to be similarly “close enough”.
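One candidate for such a transform is the inverse of an azimuthal equidistant projection: treat each point's planar distance from the layout centre as arc length along a sphere of head-like radius. Distances from the centre are then preserved exactly, and nearby pairwise distances approximately. A sketch under those assumptions (hypothetical helper; the 0.09 m radius is again an assumed head size):

import numpy as np

def plane_to_sphere(pos_2d, radius=0.09):
    # Inverse azimuthal equidistant projection: planar distance from the
    # origin becomes arc length on a sphere of the given radius.
    pos_2d = np.asarray(pos_2d, dtype=float)
    r = np.linalg.norm(pos_2d, axis=1)   # planar distance from the centre
    theta = r / radius                   # polar angle = arc length / radius
    unit = np.zeros_like(pos_2d)         # unit direction in the plane
    nz = r > 0
    unit[nz] = pos_2d[nz] / r[nz, None]
    x = radius * np.sin(theta) * unit[:, 0]
    y = radius * np.sin(theta) * unit[:, 1]
    z = radius * np.cos(theta)           # the layout centre maps to the vertex
    return np.column_stack([x, y, z])

# The origin maps to the top of the sphere; a point 3 cm out keeps its
# 3 cm arc distance from the vertex.
print(plane_to_sphere([[0.0, 0.0], [0.03, 0.0]]))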