Generative AI User Understanding | Towards Data Science

I've been in some interesting discussions recently about designing LLM-based tools for end users, and one of the key product design questions this raises is “what do people know about AI?” This matters because, as any product designer will tell you, you need to understand the user in order to successfully build something for them. Imagine you were building a website and assumed all visitors would be fluent in Mandarin, so you wrote the site in that language, but it turned out that all your users speak Spanish. Your site may be amazing, but you built it on faulty assumptions and made it less likely to succeed because of them.
So, when we build LLM-based tools for users, we have to go back and look at how those users think about LLMs. For example:
- They may not know anything about how LLMs work
- They may not realize that there are LLMs that support the tools they already use
- They may have unrealistic expectations of the LLM's abilities, due to their experience with deeply integrated agents.
- They may have a feeling of mistrust or aversion to LLM technology
- They may have varying degrees of trust or confidence in what the LLM says based on past experience
- They may expect deterministic results, which LLMs do not offer
User research is an incredibly important part of product design, and I think it's a real mistake to skip that step when we're building LLM-based tools. We cannot assume that we know how our particular audience has experienced LLMs in the past, nor can we assume that our experiences represent theirs.
User Profiles
Fortunately, there is good research on this topic that can help guide us. Useful archetypes of user perspectives can be found in the four-persona framework developed by Cassandra Jones-VanMieghem, Amanda Papandreou, and Levi Dolan at the Indiana University School of Medicine.
They propose (in a medical context, but I think it generalizes) these four categories:
Unconscious User (I Don't Know/I Don't Care)
- A user who doesn't really think about AI and doesn't see it as important in their life falls into this category. Naturally, they have a limited understanding of the underlying technology and aren't curious to learn more.
Avoidant User (AI is Dangerous)
- This user has a negative view of AI and will approach any solution with great skepticism and mistrust. For this user, an AI product offering can seriously damage the brand relationship.
AI Enthusiast (AI Is Always Beneficial)
- This user has high expectations for AI: they are enthusiastic about it, but their expectations may not be met. Users who expect AI to take away all their worries or answer any question with perfect accuracy fall into this category.
Experienced AI User (Empowered)
- This user is realistic, and likely has a generally high level of AI literacy. They may use a “trust but verify” strategy, checking citations and evidence for the LLM's assertions when those matter to them. As the authors point out, this user invokes AI only when it is useful for a particular task.
Building on this framework, I would argue that overly optimistic and overly pessimistic views are both often rooted in a specific lack of knowledge about the technology, but they do not represent the same type of user at all. Knowledge level and past experience (both how much and how good) together form the user profile. My interpretation differs slightly from the authors', who suggest that Enthusiasts are well informed; in fact, I would argue that unreasonable expectations of AI abilities are often based on a lack of knowledge, or an unbalanced application of it.
This gives us a lot to think about when designing new LLM solutions. Product developers can fall into the trap of assuming that knowledge level is the only axis, forgetting that feelings about this technology vary widely and can have a great impact on how users find and experience these products.
Why This Happens
It's worth thinking a little about the reasons for this wide range of user profiles, and the feelings in particular. Many other technologies we use regularly don't provoke this kind of polarization. LLMs and other generative AI are new to us, so that's part of the issue, but there are also qualitative aspects of generative AI that are genuinely different and can influence how people react.
Pinski and Benlian have some interesting work on this subject, noting that key aspects of generative AI disrupt the ways human-computer interaction researchers expect these relationships to work; I highly recommend reading their article.
Nondeterminism
As computing has become part of our daily lives over the past decades, we have been able to rely on a certain amount of reproducibility. When you press a key or click a button, the response from the computer is always more or less the same. This provides a sense of reliability: we know that if we learn the right patterns to achieve our goals, we can trust those patterns to stay consistent. Generative AI violates this contract, due to the nondeterministic nature of its results. The average user has little experience with the same keystroke or request returning unpredictable and often different results, and this understandably erodes the trust they may have built. There are good reasons for nondeterminism, and once you understand the technology it is just another characteristic to work with, but for a low-information user it can be a real problem.
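To make this concrete, here's a toy sketch (not a real LLM; the vocabulary, weights, and `respond` function are all invented for illustration) of why the same prompt can yield different answers. Greedy decoding always picks the most likely token and is reproducible, while temperature sampling draws from the distribution and is not.

```python
import random

# Toy next-token distribution for a fixed prompt; a stand-in for a real
# LLM's output probabilities (the vocabulary and weights are invented).
VOCAB = ["yes", "no", "maybe"]
WEIGHTS = [0.5, 0.3, 0.2]

def respond(prompt: str, temperature: float, rng: random.Random) -> str:
    """Return a 'model' reply: greedy when temperature is 0, sampled otherwise."""
    if temperature == 0:
        # Greedy decoding: always pick the most likely token -> reproducible.
        return VOCAB[WEIGHTS.index(max(WEIGHTS))]
    # Temperature sampling: reshape the distribution, then draw one token.
    scaled = [w ** (1.0 / temperature) for w in WEIGHTS]
    return rng.choices(VOCAB, weights=scaled, k=1)[0]

rng = random.Random(42)
greedy = {respond("same prompt", 0.0, rng) for _ in range(10)}
sampled = {respond("same prompt", 1.0, rng) for _ in range(100)}
print(greedy)   # the identical answer every time
print(sampled)  # several different answers for the identical prompt
```

Users accustomed to deterministic software are effectively living in the `temperature == 0` branch; production LLM products usually are not, and that mismatch is exactly the broken contract described above.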
Illegibility
This is just another term for “black box”, really. The nature of the neural networks underlying generative AI is such that even those of us who work directly with the technology cannot fully explain why a model does what it does. We cannot enumerate and interpret every weight in every layer of the network; it is simply too complex, with too many parameters. There are explainability techniques that can help us understand the levers that influence a single prediction, but a comprehensive explanation of how the technology works remains out of reach. This means we have to accept a certain level of ignorance, which can be very difficult for scientists and curious laypeople alike.
Autonomy
The growing push to make generative AI part of semi-autonomous agents means these tools increasingly operate with less supervision and less control by human users. In some cases this can be very helpful, but it can also cause anxiety. Given that these tools are largely nondeterministic and unexplainable, autonomy can feel dangerous. If we don't always know what the model is going to do, and don't fully understand why it does what it does, some users might be forgiven for concluding that this doesn't sound like a safe technology to run unsupervised. We are always working to improve testing and evaluation techniques to prevent unwanted behavior, but a certain amount of risk is unavoidable, as with any powerful technology. On the other hand, the autonomy of generative AI can create situations where users are not even aware of the AI's involvement in a task. It can work silently behind the scenes, with the user never noticing its presence. This feeds into a much larger area of concern, where AI output is passed off as, or mistaken for, human-made work.
What This Means for Products
This doesn't mean we shouldn't build products and tools that incorporate generative AI, of course. It means, as I often say, that we have to carefully consider whether generative AI is appropriate for the problem or task in front of us, and weigh the potential risks and rewards. This is always the first step: make sure AI is the right choice and that you're willing to accept the risks that come with using it.
After that, here's what I recommend to product designers:
- Conduct extensive user research. Find out how the user profiles described above are distributed in your user base, and plan how the product you create will meet them. If you have a significant portion of Avoidant users, plan a communication strategy to smooth the path to adoption, and consider rolling things out slowly to avoid shocking the user base. On the other hand, if you have a lot of Enthusiast users, be clear about the performance your tool can actually deliver, so you don't trigger a disappointed backlash. If people expect magical results from generative AI and you can't provide them, because there are significant safety, security, and performance limitations you must adhere to, that will be a problem for your user experience.
- Build for your users. This may sound obvious, but I'm really saying that your user research should deeply influence not only the look and feel of your generative AI product but its actual design and functionality. You should arrive at the engineering stage with an evidence-based idea of what the product needs to do and the different ways your users might experience it.
- Prioritize education. As I mentioned, educating your users about any solution you offer will be important, whether they are positively or negatively inclined. Sometimes we assume people will “just get it” and skip this step, but we are wrong to do so. Set realistic expectations and answer the questions a skeptical audience is likely to raise ahead of time, to ensure a good user experience.
- Don't force it. Lately, software products we've happily used in the past keep adding generative AI functionality and making it mandatory. I've written before about how market forces and patterns in the AI industry drive this, but that doesn't make it any less damaging. You have to expect that a certain group of users, however small, will want to refuse to use a generative AI tool. This may be due to personal feelings, security constraints, or simple lack of interest, but respecting that choice is the right decision to preserve your organization's good name and its relationship with those users. If your solution is useful, relevant, well-tested, and well-communicated, you may be able to grow the tool's usage over time, but forcing people won't help.
Conclusion
When it comes down to it, many of these lessons are good advice for all kinds of technical product design work. However, I want to emphasize how much generative AI changes the way users interact with technology, and the significant shift it represents in our expectations. Because of that, it's more important than ever that we really understand the user and their starting point before we launch products like this into the world. As many organizations and companies have learned the hard way, a new product is an opportunity to make an impression, but that impression can be just as easily bad as good. Your chance to impress matters, but so does the risk of ruining your relationships with users, crushing their trust in you, and setting yourself up for a major damage-control job. So, be careful and deliberate at the start! I wish you luck!
Read more about my work at www.stephaniekirmer.com.