Code for "MEG-XL: Data-Efficient Brain-to-Text via Long-Context Pre-Training" at ICML 2026
Updated May 12, 2026 - Python
Code for "MEG-XL: Data-Efficient Brain-to-Text via Long-Context Pre-Training" at ICML 2026
A brain-computer interface (BCI) for decoding speech phonemes from EEG signals using a hybrid CNN-LSTM model, with interactive Streamlit visualizations and Docker support.
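A minimal sketch of what such a hybrid CNN-LSTM decoder might look like in PyTorch: the convolutional stack extracts local spatio-temporal features from the multichannel EEG window, and the LSTM models the resulting sequence before a linear head produces phoneme logits. The channel count, layer sizes, and 39-phoneme output are illustrative assumptions, not the repo's actual configuration.

```python
import torch
import torch.nn as nn

class CNNLSTMDecoder(nn.Module):
    """Hypothetical hybrid CNN-LSTM for EEG phoneme classification."""

    def __init__(self, n_channels=64, n_phonemes=39, hidden=128):
        super().__init__()
        # 1-D convolutions over time, treating EEG channels as input features
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(128, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_phonemes)

    def forward(self, x):
        # x: (batch, channels, time)
        feats = self.cnn(x)            # (batch, 128, time / 4)
        feats = feats.transpose(1, 2)  # (batch, time / 4, 128) for the LSTM
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])   # logits from the last time step

model = CNNLSTMDecoder()
logits = model(torch.randn(2, 64, 256))  # 2 trials, 64 channels, 256 samples
print(logits.shape)  # torch.Size([2, 39])
```

The last-time-step readout is one common choice for sequence classification; attention pooling or mean pooling over the LSTM outputs would work equally well here.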
This repository contains the code required to train SVM and LDA models to distinguish perceived speech from silence in fNIRS signals.
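The classifier side of such a pipeline is straightforward with scikit-learn. The sketch below compares a linear SVM and LDA under cross-validation on synthetic stand-in features; the trial count, feature dimension, and injected class separation are assumptions for illustration only, not the repo's actual fNIRS features.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for per-trial fNIRS features:
# 100 trials x 20 features, label 1 = perceived speech, 0 = silence.
X = rng.normal(size=(100, 20))
y = rng.integers(0, 2, size=100)
X[y == 1] += 0.8  # inject a class difference so the comparison is non-trivial

results = {}
for name, clf in [("SVM", SVC(kernel="linear")),
                  ("LDA", LinearDiscriminantAnalysis())]:
    # Standardize features before fitting, refit per CV fold via the pipeline
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    results[name] = scores.mean()
    print(f"{name}: mean CV accuracy = {results[name]:.2f}")
```

Wrapping the scaler and classifier in one pipeline matters: scaling inside `cross_val_score` prevents test-fold statistics from leaking into training.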
Consumer brain-computer interface for inner speech decoding. An 8-channel EEG headband ($800) outperforms 128-channel clinical systems ($50K): EEGNet reaches 35.5% accuracy (p=0.0006) with cross-subject generalization (p=0.003). Real-time demo included.