Posted by mhb on 2025-12-03 04:07:31
The human brain constantly adapts to focus on task-relevant information. This study explores how different brain regions stretch neural representations depending on whether a task requires attention to color or motion. Using data from monkeys performing a flexible decision-making task and a deep learning model trained on the same input, the researchers show that both biological and artificial systems reshape internal representations to optimize performance. This sheds light on how attention, learning, and neural coding work at a systems level.
Subjects: Two rhesus monkeys performing color-vs-motion decisions.
Recordings: Multi-site spiking activity from PFC, FEF, LIP, IT, MT, V4.
Analysis tool: Representational Similarity Analysis (RSA).
Model: CNN-LSTM trained on the exact same stimuli (same sequence of images).
Metrics: Spike-timing distances (ISI and SPIKE distances), firing-rate coding, and model-based attention weights.
Goal: Compare neural representations with model representations to see how attention changes internal structure.
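The post does not spell out the network's architecture, so what follows is only a minimal sketch of a CNN-LSTM of the kind described: a small convolutional front end applied frame by frame, an LSTM over the resulting frame embeddings, and a linear readout for the binary decision. The class name, layer sizes, and input dimensions are illustrative assumptions, not the study's values.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Minimal CNN-LSTM: per-frame conv features -> LSTM -> choice logits.

    All layer sizes here are illustrative assumptions, not the paper's values.
    """
    def __init__(self, n_choices=2, hidden_size=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),  # -> (32, 4, 4) regardless of input size
            nn.Flatten(),             # -> 512 features per frame
        )
        self.lstm = nn.LSTM(input_size=32 * 4 * 4, hidden_size=hidden_size,
                            batch_first=True)
        self.readout = nn.Linear(hidden_size, n_choices)

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        hidden, _ = self.lstm(feats)          # (batch, time, hidden_size)
        logits = self.readout(hidden[:, -1])  # decision read out at the last time step
        return logits, hidden                 # hidden states can feed the analyses below

# Example: a batch of 8 ten-frame stimulus movies of size 64x64.
model = CNNLSTM()
movies = torch.randn(8, 10, 3, 64, 64)
logits, hidden = model(movies)
print(logits.shape, hidden.shape)  # torch.Size([8, 2]) torch.Size([8, 10, 128])
```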
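Likewise, the study's RSA pipeline is not reproduced here; the sketch below only illustrates the general recipe under simplifying assumptions: compute pairwise dissimilarities between condition responses (using a crude, discretized stand-in for the ISI distance and one spike train per condition, rather than trial- and neuron-averaged distances), assemble them into a representational dissimilarity matrix (RDM), and compare neural and model (or "intended") RDMs with a Spearman rank correlation. All function names and the toy data are placeholders.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

def isi_distance(spikes_a, spikes_b, t_start, t_end, n_grid=2000):
    """Crude, discretized stand-in for the ISI distance between two spike trains."""
    grid = np.linspace(t_start, t_end, n_grid)

    def current_isi(spikes, t):
        # Inter-spike interval containing time t (window edges as fallbacks).
        prev, nxt = spikes[spikes <= t], spikes[spikes > t]
        lo = prev[-1] if prev.size else t_start
        hi = nxt[0] if nxt.size else t_end
        return max(hi - lo, 1e-9)

    profile = np.empty(n_grid)
    for i, t in enumerate(grid):
        xa, xb = current_isi(spikes_a, t), current_isi(spikes_b, t)
        # Signed ratio of instantaneous inter-spike intervals, in [-1, 1].
        profile[i] = xa / xb - 1.0 if xa <= xb else -(xb / xa - 1.0)
    return np.mean(np.abs(profile))

def rdm_from_spike_trains(trains, t_start, t_end):
    """RDM: pairwise spike-train distances between one response per condition."""
    n = len(trains)
    rdm = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            rdm[i, j] = rdm[j, i] = isi_distance(trains[i], trains[j], t_start, t_end)
    return rdm

def rdm_similarity(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    return spearmanr(squareform(rdm_a, checks=False),
                     squareform(rdm_b, checks=False)).correlation

# Toy example: 4 stimulus conditions, Poisson-like spike trains over 1 s.
rng = np.random.default_rng(0)
trains = [np.sort(rng.uniform(0.0, 1.0, size=rng.integers(20, 60))) for _ in range(4)]
neural_rdm = rdm_from_spike_trains(trains, 0.0, 1.0)
intended_rdm = np.abs(np.subtract.outer(np.arange(4), np.arange(4))).astype(float)
print(rdm_similarity(neural_rdm, intended_rdm))
```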
Spike timing–based measures (especially ISI distance) match the intended stimulus structure better than rate coding.
All recorded brain regions show greater dissimilarity between stimuli that differ on the task-relevant dimension (color or motion); a minimal way to quantify this stretching is sketched after these findings.
Strong in: PFC, FEF, LIP.
Moderate but still present in: V4 (color) and MT (motion).
The CNN-LSTM also stretches its internal representations along task-relevant dimensions even without explicit attention mechanisms.
Unlike the brain, the LSTM can reconfigure its representations completely from one task context to the other.
Biological areas (especially MT and V4) remain partially modality-bound.
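One way to make the "stretching" concrete, purely as an illustration and not the paper's actual measure, is to compare distances between conditions that differ in color with distances between conditions that differ in motion, within a single task context. The sketch below does this for condition-averaged representation vectors; the same functions could be applied to population firing-rate vectors from a recorded area or to the LSTM hidden states from the model sketch above. The index definition, variable names, and toy data are all assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mean_distance_along(reps, labels, dim):
    """Mean pairwise distance between conditions that differ on `dim`
    ('color' or 'motion') while matching on the other dimension."""
    d = squareform(pdist(reps))            # condition-by-condition distance matrix
    other = 'motion' if dim == 'color' else 'color'
    vals = []
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if labels[i][dim] != labels[j][dim] and labels[i][other] == labels[j][other]:
                vals.append(d[i, j])
    return np.mean(vals)

def stretching_index(reps, labels, relevant_dim):
    """> 1 means the representation is expanded along the task-relevant dimension."""
    irrelevant = 'motion' if relevant_dim == 'color' else 'color'
    return (mean_distance_along(reps, labels, relevant_dim)
            / mean_distance_along(reps, labels, irrelevant))

# Toy example: 4 conditions (2 colors x 2 motion directions), 50-dim representations
# simulated for the color-task context.
labels = [{'color': c, 'motion': m} for c in ('red', 'green') for m in ('left', 'right')]
rng = np.random.default_rng(1)
color_axis, motion_axis = rng.normal(size=50), rng.normal(size=50)
reps_color_task = np.stack([
    3.0 * (lab['color'] == 'red') * color_axis       # color differences amplified
    + 1.0 * (lab['motion'] == 'left') * motion_axis  # motion differences kept smaller
    + 0.1 * rng.normal(size=50)
    for lab in labels
])
print(stretching_index(reps_color_task, labels, relevant_dim='color'))  # expected > 1
```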
This study shows that both brains and deep learning systems use adaptive stretching as a natural strategy to enhance task-relevant distinctions. The findings highlight:
The importance of spike timing in cognitive tasks.
That top-down control may emerge naturally from error-driven learning.
The brain’s mixture of flexibility (PFC) and specialization (V4, MT).
That a deep neural network can mimic these behaviors without explicit attention modules.
The research demonstrates that adaptive stretching of representations along task-relevant dimensions is a fundamental mechanism shared by biological neural circuits and deep learning networks. The brain dynamically reconfigures its representations to optimize performance, and deep models trained with error-driven learning arrive at similar strategies. This bridges neuroscience and AI, offering new insight into attention, learning, and neural coding.