Faster AI Is Not Always Perceived As Better

Summary: In the world of technology, latency is usually the enemy. However, provocative new research suggests that the "need for speed" in AI may be backfiring on user perception.
Testing 240 participants across a range of response delays (from 2 to 20 seconds), the researchers found that users rated slower AI as more thoughtful and helpful, even though the answers were identical. This suggests that people project social "thinking" cues onto AI, reading a short pause not as a technical delay but as evidence of care and deliberation.
Key Facts
- The Perception Gap: Participants who waited 9 or 20 seconds for a response rated the AI more positively than those who received a faster response (2 seconds), even when the actual content was the same.
- Anthropomorphizing Latency: Users interpreted the AI's pause as the machine "thinking," applying human conversational norms, where an instant reply can seem dismissive and a moderate delay suggests reflection, to software.
- Behavior Versus Perception: Surprisingly, response speed did not change how people used the AI. Prompting frequency and engagement remained consistent regardless of delay; the only thing that changed was users' subjective perception of the AI's thoughtfulness.
- Task-Driven Collaboration: Task type, not speed, determined interaction behavior. Creative tasks (brainstorming) produced iterative back-and-forth exchanges, while advisory tasks (evaluating decisions) led to fewer, more focused exchanges.
Source: NYU
In the race to make AI models that not only think better but respond faster, latency – the delay before a response appears – is often treated as a technical hurdle, something to minimize and engineer away. But how does this constant push for speed affect the people who use these systems every day?
A rich body of human-computer interaction research links faster response times to better usability. But AI models are very different from the deterministic systems on which that earlier research was built. When you wait for a file to download or a page to load, the result is fixed and predictable.
AI models are stochastic – you cannot predict the exact answer they will give. And because the interface is conversational, users naturally read social cues into the interaction: a pause can be taken as the AI "thinking," for example. Users are increasingly asked to choose between fast models and slower, more deliberative ones, without guidance on what that choice actually means for their experience.
Recent research presented at CHI'26 examined how response time shapes how people use and evaluate AI systems. Felicia Fang-Yi Tan and Technology Management and Innovation Professor Oded Nov recruited 240 participants and asked them to complete everyday information tasks using a chatbot. Some tasks focused on creativity, such as brainstorming ideas or drafting text.
Others focused on advice, such as evaluating decisions or making recommendations. Crucially, the system was engineered to respond at different speeds: some participants received answers after just two seconds, while others waited nine or twenty seconds.
The results challenge the long-held assumption in human-computer interaction that faster is always better.
“People think that faster AI is better, but our findings show that time actually shapes how intelligence is perceived,” Tan said. “A short pause can show care and deliberation, making the same response sound thoughtful and useful, even when nothing has changed about the underlying AI model.”
Surprisingly, how quickly the AI responded did not significantly change user behavior (e.g., frequency of prompts, copy-pasting). Participants prompted the system and interacted with it in very similar ways whether they waited two seconds or twenty. Behavior instead depended on the type of task.
Participants working on creative tasks (producing new content, such as writing) engaged in more back-and-forth, refining and iterating on ideas. Advisory tasks (providing guidance, critique, or evaluation) led to fewer, more focused exchanges.
Perception, however, was where the delay made itself felt. Participants who received two-second responses rated them as less thoughtful and less helpful. Conversely, those who experienced longer delays tended to view the same kinds of responses more favorably. Many interpreted the pause as a sign that the system was "thinking," taking care and consideration with its output.
This result highlights a subtle but powerful aspect of human psychology. In everyday conversation, pauses carry meaning. An instant reply may sound dismissive, while a moderate delay suggests reflection. People seem to apply these same social expectations to machines, even when they know they are interacting with software.
The implications extend beyond user experience. Given that latency is an inherent feature of today's AI models, perhaps the most productive question is not how to eliminate it, but what it could be designed to do.
One answer is "positive friction": deliberate slowdowns designed to promote psychological benefits such as reflection. Rather than treating every millisecond of waiting as waste, designers might ask: what can this pause do?
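As a rough illustration of designed latency (this is not code from the study; the function and parameter names here are hypothetical), a chatbot backend could pad its replies so they never arrive before a chosen minimum response time, mirroring the delay conditions the researchers tested:

```python
import time

def respond_with_min_latency(generate_fn, prompt, min_latency_s=9.0):
    """Return generate_fn(prompt), but never sooner than min_latency_s.

    generate_fn is any callable mapping a prompt string to a reply;
    the 9-second default echoes one of the study's delay conditions.
    """
    start = time.monotonic()
    reply = generate_fn(prompt)              # the actual model call
    elapsed = time.monotonic() - start
    if elapsed < min_latency_s:
        # Deliberate "thinking" pause: positive friction by design.
        time.sleep(min_latency_s - elapsed)
    return reply
```

A design like this makes the pause an explicit, tunable parameter rather than an accident of infrastructure, which is exactly the shift the researchers suggest: deciding what the wait is *for*.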
The study also raises important ethical concerns. If people equate longer response times with higher quality, they may place undue trust in slower systems, regardless of whether the output is actually better.
This raises ethical questions about whether AI systems should be designed to manage time in ways that shape users' perceptions, and if so, whether users should be told when they are.
Important Questions Answered:
Q: Should AI be deliberately slowed down?
A: The researchers introduce the concept of "positive friction." While speed serves efficiency, slowing down on purpose can encourage reflection and trust. The research warns, however, that this could be exploited deceptively, making a subpar model seem smarter simply by adding a "thinking" delay.
Q: Why do people read a pause as intelligence?
A: Because AI is conversational and humanlike, we naturally apply the same mental models we use for people. In a human conversation, a 0.5-second answer to a difficult question sounds glib, while a 5-second pause suggests the person is genuinely thinking it through.
Q: Does this mean slower AI is actually better?
A: Not necessarily. The research shows that the feeling of usefulness is partly a psychological effect of the wait. If you need raw speed for repetitive work, a faster model is better. If you want a "partner" for a complex advisory task, your brain may naturally favor the output of a slower, seemingly more thoughtful system.
Editor's Notes:
- This article was edited by a Neuroscience News editor.
- The journal paper has been peer reviewed.
- Additional content added by our staff.
About this AI research news
Author: Leah Schmerl
Source: NYU
Contact: Leah Schmerl – NYU
Image: The image is credited to Neuroscience News
Original Research: The findings were presented at CHI'26



