
ICA arguments (once again)


There’s been an extensive discussion on the ICA arguments in #4856. As a result, the documentation was improved to clarify the use of the n_components, max_pca_components, and n_pca_components arguments of mne.preprocessing.ICA.

An info box in the current docs now reads:

[For] rank-deficient data such as EEG data after average reference or interpolation, it is recommended to reduce the dimensionality (by 1 for average reference and 1 for each interpolated channel) for optimal ICA performance (see the EEGLAB wiki).

However, after reading that, I didn’t really know which of the three above-mentioned parameters to adjust. Based on @cbrnr’s comments in #4856 and some experimentation, I eventually figured out that it’s probably max_pca_components – at least if I want to achieve “EEGLAB-like” behavior. The corresponding EEGLAB wiki section reads:

There are thus some cases in which the rank reduction arising from use of average reference is not detected. In this case, the user should reduce manually the number of components decomposed. For example, when using 64 channels enter, in the option edit box, “‘pca’, 63”.

I was wondering if we could make this connection to EEGLAB behavior more explicit, to make the transition from EEGLAB to MNE-Python easier for users. Specifically,

  • state that “reduce the dimensionality” most likely means adjusting max_pca_components (and leaving n_components, n_pca_components at their respective defaults)
  • state that this would resemble the functionality of the pca parameter of EEGLAB’s pop_runica()
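What max_pca_components conceptually does – keep only the top-k PCA components before running ICA – can be sketched in plain NumPy on toy data. This is only an illustration of the principle (it is not MNE’s actual implementation, and the data here is random, not EEG):

```python
import numpy as np

# Toy "EEG": 64 channels, 1000 samples, then average reference -> rank 63
rng = np.random.default_rng(1)
data = rng.standard_normal((64, 1000))
data -= data.mean(axis=0)

# PCA via SVD; keep the top 63 components, analogous to EEGLAB's 'pca', 63
U, s, Vt = np.linalg.svd(data, full_matrices=False)
k = 63
reduced = U[:, :k].T @ data  # shape (63, 1000): full-rank input for ICA

print(reduced.shape)                    # (63, 1000)
print(np.linalg.matrix_rank(reduced))   # 63
```

After this projection, the data passed to ICA has as many dimensions as its actual rank, which is exactly the situation the EEGLAB wiki recommends for rank-deficient recordings.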

Further, I found it interesting that both the EEGLAB and MNE-Python docs only explicitly mention dimensionality reduction following average referencing (and interpolation). With the reference-free EEG setup we’re using in our lab (actiCHamp), dimensionality reduction occurs when setting any reference:

import mne

# Load reference-free BrainVision data (actiCHamp)
data = mne.io.read_raw_brainvision(infile, preload=True)
rank_before_ref = mne.compute_rank(data)['eeg']

# Setting any reference (here: Oz) introduces one linear dependency
data.set_eeg_reference(['Oz'])
rank_after_ref = mne.compute_rank(data)['eeg']

print(f'Rank before referencing: {rank_before_ref}')
print(f'Rank after referencing: {rank_after_ref}')

Produces:

...
Rank before referencing: 64
Rank after referencing: 63
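The same rank drop can be demonstrated without any EEG data, using plain NumPy on a random toy matrix: re-referencing subtracts one signal (a channel or the channel mean) from every row, which introduces a linear dependency:

```python
import numpy as np

# Toy "recording": 64 independent channels, 1000 samples -> full rank
rng = np.random.default_rng(0)
data = rng.standard_normal((64, 1000))
print(np.linalg.matrix_rank(data))  # 64

# Re-reference to channel 0: that channel becomes all zeros
ref = data - data[0]
print(np.linalg.matrix_rank(ref))   # 63

# Average reference: all channels now sum to zero at each sample
avg = data - data.mean(axis=0)
print(np.linalg.matrix_rank(avg))   # 63
```

Either way, the rank drops by exactly one, which is why the recommendation to reduce dimensionality applies to any referencing scheme, not only the average reference.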

I wonder if that could and should also be mentioned in the docs?

(Also this made it apparent to me that we really should implement #1717 to warn users of potential rank deficiency…)

cc @shirllim @nandmone

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 33 (33 by maintainers)

Top GitHub Comments

1 reaction
hoechenberger commented, Feb 13, 2020

Yes, I got that, but it does matter when one invokes ICA.apply() 😃

1 reaction
cbrnr commented, Feb 12, 2020

So, we do agree that to get EEGLAB-like behavior as in pop_runica(…, “pca”, 123), users would have to use max_pca_components=123 and leave n_components and n_pca_components alone?

That I’m not sure, I will check tomorrow.

