The brain uses language-like integration

Summary: Human brain activity during speech comprehension follows a step-by-step sequence that closely matches how large language models transform text. Using electrocorticographic recordings from people listening to a podcast, the researchers found that early brain responses correspond to the early layers of the AI models, while deeper model layers correspond to later neural activity in regions such as Broca's area.
The discovery challenges traditional views of language that rely on fixed symbolic rules, instead highlighting dynamic, contextual integration. The group also released a rich dataset linking neural signals with linguistic features, providing a powerful resource for neuroscience research.
Basic facts
- Layered alignment: Early brain responses track the early layers of the AI models, while deeper model layers correspond to later neural activity.
- Context beats rules: AI contextual embeddings predicted brain activity better than classical linguistic units.
- New resource: The researchers have released a large neural-linguistic dataset to advance the neuroscience of language.
Source: The Hebrew University of Jerusalem
In a study published in Nature Communications, researchers led by Dr. Ariel Goldstein from the Hebrew University of Jerusalem, in collaboration with Dr. Mariano Schain and colleagues at Princeton University, analyzed how the human brain processes natural speech.
Using electrocorticography (ECoG) recordings from participants listening to a 30-minute podcast, the group shows that the brain processes language in a systematic sequence that mirrors the layered architecture of large language models such as GPT-2 and Llama-2.
The findings of the study
When we listen to someone speak, our brain processes each incoming word through a cascade of neural integration. Goldstein's group found that this processing unfolds over time in a pattern similar to the layer sequence observed in AI language models.
The early layers of AI language models track simple aspects of words, while the deeper layers capture context, tone, and meaning. The research found that human brain activity follows a similar progression: early neural responses align with early model layers, and later neural responses align with deeper layers.
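The layer-to-timing alignment described above can be sketched numerically. The sketch below is a purely hypothetical illustration with synthetic data (none of the numbers, layer counts, or lags come from the study): it fabricates an encoding-score matrix whose peak drifts to later time lags for deeper layers, then checks how strongly layer depth correlates with peak latency.

```python
# Hypothetical sketch (not the authors' code): do deeper model layers
# peak later in time? We fabricate an encoding-score matrix of shape
# (n_layers, n_lags) in which the peak shifts to later lags for deeper
# layers, then correlate layer depth with peak lag.
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_lags = 24, 41                      # toy sizes
lags_ms = np.linspace(-500, 500, n_lags)       # word-aligned time lags

# Synthetic scores: a Gaussian bump whose center shifts with depth.
centers = np.linspace(-200, 300, n_layers)     # deeper layers peak later
scores = np.exp(-0.5 * ((lags_ms[None, :] - centers[:, None]) / 120) ** 2)
scores += 0.05 * rng.standard_normal(scores.shape)

peak_lag = lags_ms[scores.argmax(axis=1)]      # best lag per layer
depth = np.arange(n_layers)
r = np.corrcoef(depth, peak_lag)[0, 1]         # layer depth vs. timing
print(f"correlation between layer depth and peak lag: r = {r:.2f}")
```

In an analysis of this kind, a strongly positive correlation is what "later neural responses align with deeper layers" would look like quantitatively.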
This alignment was particularly clear in higher-level language regions such as Broca's area, where later brain responses corresponded to the deeper layers of the AI models.
According to Dr. Goldstein, “What surprised us most is that the brain's moment-by-moment processing unfolds in a sequence similar to the layer-by-layer transformations inside large language models. Even though these systems are built very differently, both seem to arrive at understanding through the same step-by-step construction.”
Why is it important
The findings suggest that artificial intelligence is not just a tool for generating text. It may also provide a new window into how the human brain works. For decades, scientists believed that language comprehension depends on symbolic rules and strictly linguistic structure.
This study challenges that view. Instead, it supports a dynamic, computational approach to language, in which meaning emerges through the ongoing integration of context.
The researchers also found that classical linguistic features such as phonemes and morphemes did not predict real-time brain activity as well as the AI models' contextual embeddings did. This reinforces the idea that the brain relies on more fluid, context-driven information than previously believed.
A new benchmark for neuroscience
To advance the field, the group publicly released a complete dataset of neural recordings paired with linguistic features. This new resource allows scientists around the world to test competing theories of how the brain understands natural language, using computational models that closely resemble human comprehension.
Important Questions Answered:
Q: How does the brain process spoken language?
A: The brain transforms spoken language through a sequence of processing steps that align with the successive layers of large language models.
Q: Why do these findings matter for theories of language?
A: They challenge rule-based theories of language, suggesting that meaning emerges in a dynamic, context-driven manner similar to modern AI systems.
Q: What resource did the researchers release?
A: Publicly available electrocorticography recordings paired with linguistic features, enabling new tests of competing theories of language.
Editing notes:
- This article was edited by a Neuroscience News editor.
- The research is peer reviewed.
- Additional context added by our staff.
About this language and AI news
Author: Yearden Mills
Source: The Hebrew University of Jerusalem
Contact: Yearden Mills – The Hebrew University of Jerusalem
Image: The image is credited to Neuroscience News.
Original research: Open access.
“The temporal structure of natural language processing in the human brain corresponds to the layered hierarchy of large language models” by Uri Hasson et al. Nature Communications
Abstract
The temporal structure of natural language processing in the human brain corresponds to the layered hierarchy of large language models
Large language models (LLMs) provide a framework for understanding language processing in the human brain. Unlike traditional symbolic models, LLMs represent words and their context as continuous numerical embeddings.
Here, we show that the LLMs' layer hierarchy aligns with the temporal dynamics of language comprehension in the brain.
Using electrocorticography (ECoG) data from participants listening to a 30-minute narrative, we show that deeper layers of the LLMs correspond to later brain activity, especially in language-related areas.
We extract contextual embeddings from GPT-2 XL and Llama-2 and use linear encoding models to predict neural responses at each time point. Our results reveal a strong relationship between model depth and the timing of neural responses during comprehension.
We also compare LLM-based predictions with those based on classical linguistic features, highlighting the advantage of deep language models in capturing neural dynamics.
We release our neurally aligned dataset as a public benchmark for testing competing theories of language processing.
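The linear encoding approach described in the abstract — mapping a model layer's embeddings to neural responses — can be illustrated with a minimal sketch. Everything below is synthetic and hypothetical (toy dimensions, fabricated signals, not the released dataset): it shows only the shape of the method, a ridge-regularized linear fit scored by correlation on held-out words, as encoding models typically are.

```python
# Hypothetical encoding-model sketch with synthetic data (not the
# authors' code). A ridge-regularized linear map predicts one
# electrode's response from word embeddings, scored by correlation
# on held-out words.
import numpy as np

rng = np.random.default_rng(42)
n_words, emb_dim = 1000, 64                           # toy sizes

X = rng.standard_normal((n_words, emb_dim))           # "layer embeddings"
w_true = rng.standard_normal(emb_dim)
y = X @ w_true + 0.5 * rng.standard_normal(n_words)   # "electrode signal"

# Train/test split and closed-form ridge solution:
# w = (X'X + alpha*I)^-1 X'y
X_tr, y_tr, X_te, y_te = X[:800], y[:800], X[800:], y[800:]
alpha = 1.0
w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(emb_dim), X_tr.T @ y_tr)

r = np.corrcoef(X_te @ w, y_te)[0, 1]                 # held-out performance
print(f"held-out correlation: r = {r:.2f}")
```

In a full analysis of this kind, one such model would be fit per electrode, per model layer, and per time lag, and the depth of the best-predicting layer would be compared with the latency of the neural response.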



