This repository contains the code required to train SVM and LDA models to distinguish between perceived speech and silence from fNIRS signals.
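A minimal sketch of what such a pipeline could look like using scikit-learn. The feature matrix, labels, and feature count below are placeholders, not the repository's actual preprocessing; fNIRS trials are assumed to have already been reduced to flat feature vectors (e.g. per-channel hemodynamic averages):

```python
# Illustrative only: random data stands in for real fNIRS features.
# Labels: 1 = perceived speech, 0 = silence.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))    # placeholder: 120 trials x 40 fNIRS features
y = rng.integers(0, 2, size=120)  # placeholder binary labels

for name, clf in [("SVM", SVC(kernel="linear")),
                  ("LDA", LinearDiscriminantAnalysis())]:
    # Standardize features, then fit the classifier; score with 5-fold CV.
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f} accuracy")
```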
A brain-computer interface (BCI) for decoding speech phonemes from EEG signals using a hybrid CNN-LSTM model, with interactive Streamlit visualizations and Docker support.
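A rough sketch of a hybrid CNN-LSTM of this kind in PyTorch. The channel count, layer sizes, and phoneme-set size are assumptions for illustration, not this project's actual architecture: a 1D convolution extracts local temporal features from the EEG, and an LSTM models the resulting sequence before a linear layer emits phoneme logits.

```python
# Illustrative only: shapes and hyperparameters are assumed, not taken
# from the repository.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """1D CNN over the EEG time axis, followed by an LSTM."""
    def __init__(self, n_channels=64, n_phonemes=39):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_phonemes)

    def forward(self, x):        # x: (batch, channels, time)
        x = self.conv(x)         # -> (batch, 32, time/2)
        x = x.transpose(1, 2)    # -> (batch, time/2, 32) for the LSTM
        _, (h, _) = self.lstm(x) # h: (1, batch, 64), final hidden state
        return self.head(h[-1])  # phoneme logits

logits = CNNLSTM()(torch.randn(8, 64, 256))  # 8 trials, 64 ch, 256 samples
print(logits.shape)                          # torch.Size([8, 39])
```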