When using AI, users fall into a Dunning-Kruger trap

Summary: A new study reveals that when people interact with AI tools like ChatGPT, everyone – even the most skilled users – overestimates their performance. The researchers found that the classic Dunning-Kruger effect disappears; instead, the most AI-literate users showed the greatest overconfidence about their abilities.
Research shows that relying on AI encourages "cognitive offloading," where users trust the program's output without reflecting or double-checking. Experts say AI literacy alone is not enough; people need platforms that encourage metacognitive awareness and critical thinking, so they can see where things might go wrong.
Key facts
- Reversed Dunning-Kruger: The most AI-literate users overestimated their skills even more than novices when using ChatGPT.
- Cognitive offloading: Most participants relied on a single query and accepted the AI's response without reflection.
- Metacognition gap: Current AI tools fail to help users test their thinking or learn from mistakes.
Source: Aalto University
When it comes to judging how good we are at something, research consistently shows that we tend to rate ourselves as better than average. This tendency is strongest in people who score lowest on cognitive tests.
This is known as the Dunning-Kruger effect (DKE): the worse people actually perform, the more they tend to overestimate their abilities, and the less insight they have into their true competence.
However, a study led by Aalto University reveals that when it comes to AI – specifically, large language models (LLMs) – this pattern breaks down. The researchers found that all users were unable to accurately assess their own performance when using ChatGPT. In fact, across the board, people overestimated how well they had done.
On top of this, the researchers point to a reversal of the Dunning-Kruger effect: the users who considered themselves most AI-literate were the ones who overestimated their performance the most.
'We found that when it comes to AI, the DKE vanishes. In fact, what is really surprising is that higher AI literacy brings more overconfidence,' says Professor Robin Welsch. 'We expected that AI-literate people would not only be better at interacting with AI systems, but also at judging their performance with these systems – but this was not the case.'
The findings add to a rapidly growing body of research showing that blindly trusting AI carries risks, such as weakening the ability to find reliable information and even eroding existing skills. While people do perform better when they use ChatGPT, hardly anyone can accurately judge how well they actually did.
'AI literacy is really important these days, which makes this a very striking effect. AI literacy tends to be quite technical, and it does not really help people interact fruitfully with AI systems,' says Welsch.
'Current AI tools are not enough. They do not foster metacognition [awareness of one's own thought processes], and we do not learn from our mistakes,' adds doctoral researcher Daniela da Silva Fernandes. 'We need to create platforms that encourage our reflection process.'
The study was published on October 27 in the journal Computers in Human Behavior.
Why a single interaction is not enough
The researchers designed two experiments in which about 500 participants completed logical reasoning tasks from the US Law School Admission Test (LSAT). Half of the participants used AI and half did not. After each task, subjects were asked to report how well they had done – and they were promised extra compensation if they assessed their performance accurately.
'These tasks take a lot of effort and thought. Now that people use AI every day, it's quite normal to hand something like this to AI to solve, because it's such a challenge,' says Welsch.
The data revealed that users rarely engaged with ChatGPT beyond a single query per task. Usually, they simply pasted in the question, submitted it to the AI system, and accepted the AI's solution without any second-guessing or checking.
'We expected people to really probe the AI system, but we found that people just assumed the AI would solve things for them. Often there was only a single interaction to get the results, meaning users blindly trusted the system. That's what we call cognitive offloading, where all the processing is done by the AI,' Welsch explains.
Such shallow engagement may deprive users of the cues they need to monitor themselves and evaluate their performance accurately. Encouraging users to probe or test the AI's answers with follow-up queries could therefore provide better feedback loops and improve users' metacognition, he said.
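To make the study's core measure concrete, here is a minimal Python sketch of metacognitive calibration as the gap between a participant's estimated and actual score. The numbers are invented for illustration only; they are not the study's data, and the study itself used a computational model rather than this simple difference score.

```python
# Illustrative sketch: overconfidence as the gap between a participant's
# self-estimated score and their actual score on 20 LSAT-style items.
# All values below are made up for demonstration purposes.

from statistics import mean

# Each tuple: (estimated_correct, actual_correct) out of 20 items.
participants = [
    (16, 13),  # overestimates by 3
    (18, 14),  # overestimates by 4
    (17, 12),  # overestimates by 5
    (15, 11),  # overestimates by 4
]

# Positive calibration error = overconfidence; negative = underconfidence.
errors = [estimated - actual for estimated, actual in participants]

print("Per-participant overestimation:", errors)  # [3, 4, 5, 4]
print("Mean overestimation:", mean(errors))       # 4, echoing Study 1's ~4-point gap
```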
So what is the practical solution for everyday AI users?
'AI could ask users whether they can explain their reasoning further. This would force the user to interact more with the AI, confront their own illusions, and develop their critical thinking,' says Fernandes.
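As a rough illustration of Fernandes' suggestion, the sketch below shows one way a chat loop could ask users to commit to their own reasoning before revealing the model's answer. This is not the researchers' tool; the `ask_model` function is a hypothetical placeholder that a real implementation would replace with a call to an actual LLM service.

```python
# Minimal sketch of a reflection-first chat loop. ask_model() is a
# hypothetical stand-in for a real LLM API call, used only to show the flow.

def ask_model(question: str) -> str:
    """Hypothetical placeholder for a call to an LLM service."""
    return "(model's answer would appear here)"

def reflective_session(question: str) -> None:
    # 1. Ask the user to state their own reasoning and answer first.
    input(f"{question}\nYour reasoning and answer first: ")

    # 2. Only then reveal the model's answer.
    print(f"\nModel: {ask_model(question)}")

    # 3. Prompt a comparison, nudging metacognitive monitoring.
    input("Where do your reasoning and the model's answer differ? ")

if __name__ == "__main__":
    reflective_session("Which conclusion follows from the premises above?")
```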
Key questions answered:
Q: What did the study find about the Dunning-Kruger effect and AI?
A: Instead of only the less skilled users showing overconfidence, all AI users overestimated their performance – and the effect even reversed, with the most AI-literate users overestimating the most.
Q: Why do the findings matter?
A: They highlight how blind trust in AI can crowd out critical thinking, and they suggest the need for tools that encourage users to reflect and think critically.
Q: Who overestimated their performance?
A: Everyone, regardless of how much they knew about AI, overestimated their performance, showing that even experienced users cannot accurately judge their own success.
About this AI research news
Author: Sarah Hudson
Source: Aalto University
Contact: Sarah Hudson – Aalto University
Image: The image is credited to Neuroscience News.
Original research: Open access.
"AI makes you smarter but none the wiser: The disconnect between performance and metacognition" by Robin Welsch et al. Computers in Human Behavior
Abstract
AI makes you smarter but none the wiser: The disconnect between performance and metacognition
Effective human-AI interaction requires users to reflect critically on their own performance, yet little is known about how using AI shapes users' metacognition.
In two large studies, we investigate how AI use is associated with users' metacognitive monitoring and performance in logical reasoning tasks. Specifically, our paper examines whether humans using AI to solve reasoning tasks can accurately monitor how well they perform.
In Study 1, participants (n = 246) used AI to solve 20 logical reasoning problems from the Law School Admission Test (LSAT).
While AI use improved their performance by about three points relative to the population average, participants overestimated their performance by about four points. Interestingly, higher AI literacy corresponded to lower metacognitive accuracy, suggesting that those with more technical knowledge of AI were more confident but less accurate in judging their own performance.
Using a computational model, we examined individual differences in metacognitive accuracy and found that the Dunning-Kruger effect, often observed in such tasks, ceased to exist with AI use. Study 2 (n = 452) replicated these findings.
We discuss how AI use levels out individual differences in metacognitive monitoring in human-AI interaction, and we consider the implications for designing AI applications that promote accurate self-evaluation, avoid overconfidence, and improve metacognitive performance.



