From Brain to Text
NEST uses a transformer-based architecture to decode EEG signals
into natural language. Here's an end-to-end overview of our pipeline.
Architecture Overview
The complete NEST pipeline in 4 steps
1. EEG Input: 105-channel EEG signals captured during reading tasks.
2. Preprocessing: filtering, normalization, and feature extraction (see the sketch after this list).
3. Transformer: encoder-decoder attention for sequence modeling.
4. Text Output: natural language text decoded from brain signals.
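To make the preprocessing step concrete, here is a minimal sketch that bandpass-filters and z-scores a multichannel EEG array. The 0.5–70 Hz band, 500 Hz sampling rate, and filter order are illustrative assumptions, not NEST's published settings.

```python
# Hypothetical preprocessing sketch: bandpass filter + per-channel z-scoring.
# Band edges, sampling rate, and filter order are assumptions for illustration.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_eeg(raw, fs=500.0, low=0.5, high=70.0, order=4):
    """raw: (n_channels, n_samples) array of EEG voltages."""
    # Zero-phase Butterworth bandpass to remove slow drift and high-frequency noise.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, raw, axis=-1)
    # Per-channel z-score normalization.
    mean = filtered.mean(axis=-1, keepdims=True)
    std = filtered.std(axis=-1, keepdims=True) + 1e-8
    return (filtered - mean) / std

# Example: one second of 105-channel EEG sampled at 500 Hz.
eeg = np.random.randn(105, 500)
features = preprocess_eeg(eeg)
print(features.shape)  # (105, 500)
```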
Model Architecture
EEG Encoder: 6 transformer layers with 512-dimensional hidden states and 8 attention heads; processes raw EEG signals into latent representations.
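The sketch below builds an encoder with these stated sizes in PyTorch. The linear projection from 105 channels to the 512-dim model width is an assumption, and positional encodings are omitted for brevity.

```python
# Hypothetical EEG encoder sketch matching the stated sizes:
# 6 layers, 512-dim hidden states, 8 attention heads.
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    def __init__(self, n_channels=105, d_model=512, n_heads=8, n_layers=6):
        super().__init__()
        # Project each time step's 105-channel sample into the model width.
        # (Positional encodings omitted to keep the sketch short.)
        self.input_proj = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, eeg):  # eeg: (batch, time, channels)
        return self.encoder(self.input_proj(eeg))

# Example: a batch of 2 windows, 500 time steps, 105 channels.
latents = EEGEncoder()(torch.randn(2, 500, 105))
print(latents.shape)  # torch.Size([2, 500, 512])
```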
Cross-Attention: 8 attention heads that learn the alignment between EEG features and text tokens for accurate decoding.
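In cross-attention, each text-token state queries the EEG latents as keys and values. A minimal sketch using nn.MultiheadAttention with the stated 8 heads (the specific module choice is an assumption):

```python
import torch
import torch.nn as nn

# Hypothetical cross-attention sketch: text-token queries attend over
# EEG latents (keys/values) with 8 heads.
attn = nn.MultiheadAttention(embed_dim=512, num_heads=8, batch_first=True)

text_states = torch.randn(2, 20, 512)   # decoder states for 20 tokens
eeg_latents = torch.randn(2, 500, 512)  # encoder output, as above

out, weights = attn(query=text_states, key=eeg_latents, value=eeg_latents)
print(out.shape, weights.shape)  # (2, 20, 512), (2, 20, 500)
```

The attention weights give, for each text token, a distribution over EEG time steps, which is what lets the model align brain activity with the words being decoded.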
Text Decoder: 6 transformer layers over a 50K-token vocabulary; generates natural language output autoregressively.
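Autoregressive generation means the decoder emits one token at a time, each step conditioning on the tokens so far and on the EEG latents via cross-attention. A greedy-decoding sketch follows; the BOS/EOS token IDs are placeholders, and positional encodings are again omitted.

```python
import torch
import torch.nn as nn

# Hypothetical decoder sketch: 6 layers, 50K vocabulary, greedy
# autoregressive generation conditioned on EEG latents.
VOCAB, D_MODEL, BOS, EOS = 50_000, 512, 1, 2  # placeholder special tokens

embed = nn.Embedding(VOCAB, D_MODEL)
layer = nn.TransformerDecoderLayer(d_model=D_MODEL, nhead=8, batch_first=True)
decoder = nn.TransformerDecoder(layer, num_layers=6)
lm_head = nn.Linear(D_MODEL, VOCAB)

@torch.no_grad()
def greedy_decode(eeg_latents, max_len=20):
    tokens = torch.tensor([[BOS]])  # start from the beginning-of-sequence token
    for _ in range(max_len):
        # Causal mask keeps each position from attending to later tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = decoder(embed(tokens), memory=eeg_latents, tgt_mask=mask)
        # Pick the most likely next token from the last position's logits.
        next_id = lm_head(h[:, -1]).argmax(-1, keepdim=True)
        tokens = torch.cat([tokens, next_id], dim=1)
        if next_id.item() == EOS:
            break
    return tokens

print(greedy_decode(torch.randn(1, 500, D_MODEL)))
```

Greedy decoding is the simplest strategy; in practice a beam search or sampling scheme can be dropped into the same loop without changing the model.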