
The brain has a dance mode, and AI has already helped map it

Summary: A team of researchers used a large dance video dataset and advanced AI models to map how the human brain interprets dance, revealing a notable difference between experts and non-experts. By pairing fMRI recordings with AI-based cross-modal features, they found that higher-order brain regions track expressive, meaningful movement when viewers watch choreography.

Experts show varied and non-uniform response patterns, suggesting that expertise expands, rather than suppresses, perceptual richness. The discovery highlights deep parallels between predictive AI models and human cognition, pointing to new ways of studying how we perceive art through the senses.

Key facts:

  • Encoding performance: Combined music-and-movement features predicted neural activity better than movement alone or sound alone.
  • Expert brains respond differently: Dance experts show distinct and more varied brain patterns when watching performances.
  • AI mirrors human prediction: Predictions from an AI choreography model closely match how the brain integrates audiovisual information.

Source: The University of Tokyo

Dance is a form of cultural expression that has endured throughout human history, weaving movement together with sound and rhythm.

A team at the University of Tokyo found that participants showed different fMRI activity patterns depending on their level of dance expertise.

The discovery was enabled by recent breakthroughs in dance motion-capture datasets and generative AI models, facilitating a cross-modal study that captures the complexity of the art form.

Previous studies of dance have often been limited to tightly controlled movements, isolated music, or coarse binary contrasts between conditions. Linking rich, real-world stimuli to localized brain activity allowed the team to capture the nuanced, large-scale relationships at play in dance.

This research project, led by Professor Hiroshi Imamizu of the University of Tokyo, Associate Professor Yu Takagi of the Nagoya Institute of Technology, and their team, built encoding models to compare brain responses with features extracted from AI models of dance.

“In our research we were striving to understand how the human brain directs body movements. As an example of everyday life, dance provided a good subject,” said Imamizu. “Our team had a great respect for genres like street dance and ballet, and working with street dance experts, the research quickly took on a life of its own.”

According to the group, the biggest problem so far has been that, to study how people identify and respond to the abundance of stimuli in the real world, researchers must account for a wealth of perceptual information.

“This is where the release of the AIST Dance Video Database was a stroke of luck. It has over 13,000 recordings covering 10 styles of street dance,” said Imamizu. “It also led to an AI model that generates choreography from music. It almost felt like our research was being driven by this new age of technology itself.”

In describing the research, the researchers said that one of the underlying problems they wanted to solve is how the brain and AI relate to each other. Can AI models represent the human mind? And, conversely, can brain function be used to understand the inner workings of AI?

To answer this, the team recruited 14 participants with mixed dance backgrounds and recorded their brain responses while they watched 1,163 dance clips featuring different dancers and styles.

“By linking the choreography-generating AI to fMRI, or functional magnetic resonance imaging, a technique that visualizes the active regions of the brain, we could see where the brain binds music and movement together,” said Takagi.

“Cross-modal features predicted activity in higher-order regions better than motion-only or sound-only features, which is evidence that these regions integrate different sensory modalities such as vision and hearing.”
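The comparison Takagi describes can be illustrated with a voxelwise encoding-model sketch. Everything below is synthetic and illustrative: the feature dimensions, the ridge penalty, and the train/test split are assumptions for demonstration, not the study's actual pipeline.

```python
# Minimal sketch of comparing motion-only, sound-only, and cross-modal
# encoding models of fMRI responses. All data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_clips, n_voxels = 400, 50

# Hypothetical stimulus features for each clip: movement features, music
# features, and their combination (the "cross-modal" feature set).
motion = rng.standard_normal((n_clips, 30))
sound = rng.standard_normal((n_clips, 20))
cross_modal = np.hstack([motion, sound])

# Synthetic voxel responses that genuinely mix both modalities, so the
# cross-modal model should predict them best.
true_w = rng.standard_normal((cross_modal.shape[1], n_voxels))
bold = cross_modal @ true_w + 0.5 * rng.standard_normal((n_clips, n_voxels))

def encoding_score(features, responses):
    """Fit a ridge encoding model and return the mean test-set correlation
    between predicted and measured voxel responses."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, responses, test_size=0.25, random_state=0)
    pred = Ridge(alpha=10.0).fit(X_tr, y_tr).predict(X_te)
    corrs = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1]
             for v in range(responses.shape[1])]
    return float(np.mean(corrs))

scores = {name: encoding_score(feats, bold)
          for name, feats in [("motion", motion), ("sound", sound),
                              ("cross-modal", cross_modal)]}
print(scores)
```

Under these assumptions, the cross-modal model scores highest because each unimodal model is missing half of the informative features, which is the same logic the study uses to argue that higher-order regions integrate modalities.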

The findings also suggested that the design of the music-to-choreography model aligns well with human perception, as revealed by similarities in how the natural and artificial systems process and integrate audiovisual details.

In addition, to identify how different aspects of dance map onto brain responses and emotional experiences, the team created a list of concepts reported by viewers of dance from many backgrounds.

Responses from an internet survey were analyzed with the brain-activity simulator the team had developed. The analysis showed that different expressive qualities correspond to distinct neural patterns, and that the perceptual and emotional experience of dance cannot be reduced to a single measure.
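One way to check the claim that different expressive qualities map onto distinct neural patterns is a simple nearest-centroid comparison on simulated activity. The sketch below is purely illustrative: the impression concepts, pattern dimensions, and noise level are assumptions, not the team's actual analysis.

```python
# Toy check: if impression concepts elicit distinct patterns, each clip's
# simulated pattern should sit closest to its own concept's mean pattern.
import numpy as np

rng = np.random.default_rng(1)
concepts = ["energetic", "graceful", "playful"]  # hypothetical labels
n_per, n_dims = 40, 25

# Hypothetical simulated brain patterns: one mean pattern per concept,
# plus per-clip noise.
centroids = {c: rng.standard_normal(n_dims) for c in concepts}
clips, labels = [], []
for c in concepts:
    clips.append(centroids[c] + 0.6 * rng.standard_normal((n_per, n_dims)))
    labels += [c] * n_per
clips = np.vstack(clips)

def nearest_concept(pattern):
    """Return the concept whose mean pattern is closest to this clip."""
    return min(concepts, key=lambda c: np.linalg.norm(pattern - centroids[c]))

accuracy = np.mean([nearest_concept(p) == l for p, l in zip(clips, labels)])
print(f"nearest-centroid accuracy: {accuracy:.2f}")  # chance level is 1/3
```

If the concepts instead shared a single underlying pattern, this accuracy would fall to chance, which is the intuition behind the finding that dance experience is not reducible to one dimension.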

“Surprisingly, our brain-activity simulator was able to predict the responses of the expert audience more accurately than those of the non-expert audience.

“In other words, the results suggest that brain responses are diversified rather than homogenized by expertise. This has very interesting implications for understanding the relationship between experience and the vividness of artistic expression.

“We believe that the approach shown here, linking tightly controlled methods with large-scale, real-world data, opens up new dimensions of research opportunities.”

For the members of the group, the results came full circle.

“We would love nothing more than to see our advanced brain simulator used as a tool to create new dance styles that move people. We are very keen to explore applications in other art forms as well,” said Imamizu.

Important Questions Answered:

Q: How does the brain process dance differently for experts vs. non-experts?

A: Experts show more varied and well-organized activity patterns when watching dance, reflecting rich visual and emotional representations.

Q: What brain regions synchronize music and movement during dance observation?

A: Higher-order association areas integrate auditory and visual information, beyond regions devoted to movement or music processing alone.

Q: How is AI helping to reveal how people perceive dance?

A: AI-generated choreography linked to fMRI encoding reveals how the brain combines auditory and visual cues.

Editing notes:

  • This article was edited by a Neuroscience News editor.
  • The journal article is peer reviewed.
  • Additional context added by our staff.

About this AI, neuroscience, and dance research news

Author: Rohan Mehra
Source: The University of Tokyo
Contact: Rohan Mehra – University of Tokyo
Image: The image is credited to Neuroscience News

Actual research: Open access.
“Prominent deep neural models reveal cortical representations of dance” by Hiroshi Imamizu et al. Nature Communications



Prominent deep neural models reveal cortical representations of dance

Dance is an ancient, universal art form that has been practiced throughout human history.

Although it provides a window into cognition, emotion, and cross-modal processing, fine-grained accounts of how its various details are represented in the brain remain rare.

Here, we fit features from a deep dance-generation model to functional magnetic resonance imaging responses recorded while participants watched natural dance clips.

We show that cross-modal features explain cortical activity better than low-level motion and sound features.

Using these encoding models as brain-activity simulators, we show how dances expressing different emotions elicit distinct neural patterns.

The brain activity of expert dancers is more strongly shaped by the dance material than that of novices, and experts show greater variability.

Our cross-modal approach, drawing on generative models for naturalistic neuroimaging, clarifies how movement, music, and expertise combine in aesthetic and collective experience.
