
Why AI conversations still sound fake

Summary: A new study comparing human and AI-generated conversations reveals that major language models such as ChatGPT and Claude still fail to mimic natural human dialogue. The researchers found that these models are too eager to align – they over-imitate their conversation partners, misuse filler words like “well” or “like,” and struggle with natural openings and closings.

This “exaggerated alignment” gives them an artificial quality, even when their grammar and logic are flawless. While conversational AI continues to evolve rapidly, researchers say the subtle social aspects of human communication may remain out of reach.

Key facts:

  • Exaggerated imitation: AI models over-imitate human speech patterns, which the researchers say people instinctively recognize as unnatural.
  • Misused discourse markers: Large language models underuse or overuse small words such as “so” or “well,” disrupting the natural flow of dialogue.
  • Awkward openings and closings: AI often fails to open and close conversations naturally, missing the social nuances that frame human dialogue.

Source: NTNU

It's easy to be impressed by artificial intelligence. Many people use large language models such as ChatGPT, Copilot and Perplexity to help solve various tasks, or simply for entertainment.

But how good are these large language models at pretending to be human?

Not very good, according to a recent study.

“Large language models speak differently than humans,” said Associate Professor Lucas Bietti from the Department of Psychology at the Norwegian University of Science and Technology (NTNU).

Bietti was one of the authors of a research article recently published in the journal Psychological Science. The first author is Eric Mayor from the University of Basel, while the last author is Adrian Bangerter from the University of Neuchâtel.

Several models were tested

The large language models the researchers tested were ChatGPT-4, Claude Sonnet 3.5, Vicuna and Wayfarer.

  • First, they compared transcripts of telephone conversations between people with conversations generated by the large language models.
  • They then tested whether people could distinguish the human phone conversations from those produced by the language models.

For the most part, people aren't fooled – at least not yet. So what do the language models get wrong?

Exaggerated imitation

When people talk to each other, a certain amount of mirroring goes on. We subtly adapt our words and our conversation to the other person. The imitation, however, is usually understated.

“Large language models are a little too eager to imitate, and this exaggerated imitation is something people can pick up on,” explains Bietti.

This is called 'exaggerated alignment'.
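As a rough illustration of what alignment means in practice (a sketch of ours, not the measure used in the study), lexical alignment can be approximated as the word overlap between a reply and the partner's preceding turn:

```python
# Illustrative sketch only (not the study's measure): approximate
# lexical alignment as the Jaccard overlap between a reply and the
# partner's preceding turn. Higher values mean more word reuse.
def lexical_alignment(prev_turn: str, reply: str) -> float:
    a = set(prev_turn.lower().split())
    b = set(reply.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# An over-aligned reply reuses nearly all of the partner's words:
print(lexical_alignment("the weather is nice today",
                        "yes the weather is really nice today"))
```

A human reply typically reuses only a few of the partner's words; scores near 1 on turn after turn would be the kind of exaggerated alignment the researchers describe.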

But that's not all.

Incorrect use of filler words

Movies with bad scripts often have artificial-sounding dialogue. In such cases, the scriptwriters have often forgotten that conversations don't consist solely of the necessary content words. In real, everyday conversations, most of us sprinkle in small words called 'discourse markers'.

These are words like 'so', 'well', 'like' and 'however'.

These words have a social function because they can signal interest, belonging, attitude or meaning to the other person. They can also be used to organize the conversation.
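To make this concrete (a minimal sketch of ours, not the method used in the paper), one can count how often such discourse markers appear per word in a transcript:

```python
# Minimal sketch (ours, not the paper's method): the fraction of words
# in a set of conversational turns that are discourse markers.
DISCOURSE_MARKERS = {"so", "well", "like", "anyway"}

def marker_rate(turns):
    # Flatten turns into words, stripping simple punctuation.
    words = [w.strip(".,!?").lower() for turn in turns for w in turn.split()]
    if not words:
        return 0.0
    return sum(w in DISCOURSE_MARKERS for w in words) / len(words)

turns = ["well, so I was thinking", "like, anyway, see you later"]
print(marker_rate(turns))  # 4 markers out of 10 words -> 0.4
```

Comparing such rates between human and model transcripts would reveal the kind of overuse or underuse the researchers report.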

The big language models are terrible at using these words.

“Large language models use these small words differently, and often incorrectly,” Bietti said.

This helps expose them as non-human. But there is something else.

Openings and closings

When you start talking to someone, you usually don't get straight to the point. Instead, you might start by saying 'hey' or 'by the way, how are you?' or 'oh, fancy seeing you here'. People often engage in small talk before moving on to what they actually want to talk about.

This transition from introduction to business happens more or less automatically for people, without being explicitly announced.

“This small talk, and the transition to a new phase of the conversation, is also difficult for the large language models to imitate,” said Bietti.

The same applies to the end of a conversation. We usually don't hang up the moment the information has been conveyed to the other person. Instead, we tend to wind down with phrases like 'ok, then', 'alright', 'talk to you later'.

Large language models don't handle that part well either.

Better in the future? Maybe

All in all, these features cause large language models enough trouble that the conclusion is clear:

“Large language models still can't imitate humans well enough to fool us consistently,” Bietti said.

Development in this field is moving so fast that large language models may soon be able to do this – at least if we want them to. But will they?

“Developments in large language models will probably narrow the gap between human conversations and artificial ones, but key differences are likely to remain,” concludes Bietti.

For now, large language models are still not good enough to fool us. At least not all the time.

Important Questions Answered:

Q: Why do AI conversations feel unnatural?

A: AI often over-imitates its conversation partner and misses the subtleties of conversation – especially in timing, small talk and social rhythm – that make human speech flow naturally.

Q: What errors give AI away?

A: Incorrect use of filler words, awkward openings and closings, and exaggerated alignment make AI sound robotic or scripted.

Q: Will AI ever sound fully human?

A: Possibly, but the researchers suggest that key differences in empathy, timing and social purpose may always separate people from machines.

About This AI News

Author: Nancy Bazilchuk
Source: NTNU
Contact: Nancy Bazilchuk – NTNU
Image: The image is credited to Neuroscience News

Actual research: Open access.
“Can large language models simulate human spoken conversations?” by Eric Mayor et al. Psychological Science


Abstract

Can large language models simulate human spoken conversations?

Large language models (LLMs) can simulate many aspects of human cognition and have been proposed as potential conversational partners.

They excel at text-based conversation, but little is known about their ability to simulate spoken conversation. We investigated whether LLMs can simulate human spoken conversation.

In Study 1, we compared transcripts of human telephone conversations from the Switchboard (SB) corpus to a corpus of transcripts produced by four powerful LLMs (ChatGPT-4, Claude Sonnet 3.5, Vicuna and Wayfarer), using various prompts designed to emulate SB conversations.

We compared LLM and SB dialogues in terms of alignment (conceptual, syntactic and lexical), use of discourse markers, and conversational openings and closings.

We also documented qualitative features in which LLM conversations differed from SB conversations.

In Study 2, we tested whether people could distinguish LLM-generated transcripts from SB conversations. LLM conversations showed exaggerated alignment (with alignment increasing as conversations unfolded) relative to human conversations, distinctive and often inappropriate use of discourse markers, and openings and closings that differed from those of human conversations.

LLM conversations could not pass for SB conversations. The spoken conversations produced by LLMs thus still differ detectably from those of people.

This gap may narrow with better LLMs and more training on spoken conversation, but it may also stem from a fundamental difference between spoken and text-based communication.
