Navigating the Promise and Pitfalls
As UX professionals, we often reflect on our existing methods and seek new ones in order to improve our research processes and deliver more user-centred designs and experiences. The increasing popularity of generative AI tools has sparked considerable interest across various fields, including UX. A recent study by Dashkevych and Portnov (2024) on the use of generative AI in research offers some insights that could have significant implications for our field.
In this article, we’ll discuss the findings of this study and then explore the potential applications, limitations, and considerations of using AI tools in UX research.
As we’ve seen in previous posts, including my research project from a year ago, UX professionals have been cautiously embracing AI in their work. Dashkevych and Portnov’s study explored how AI tools like ChatGPT, InferKit, and DeepAI could assist in various stages of the research process, from problem definition to literature review and recommendations. For UX researchers and people conducting research in general, this could potentially streamline our workflow in several ways:
- Literature review and synthesising knowledge: AI tools demonstrated an ability to quickly summarise existing knowledge on a topic, potentially uncovering insights researchers might have missed while covering a larger body of literature in less time. In UX research, this could be particularly valuable when exploring new domains or interdisciplinary areas and conducting secondary research. For instance, when investigating emerging technologies like augmented reality (AR) in retail environments, an AI tool could help synthesise findings from retail, psychology, and human-computer interaction fields, providing a more comprehensive foundation for our research.
- Gap analysis and research direction: The study also found that AI tools were often able to identify missing or overlooked information. In UX research, this could help us identify blind spots in our research plans or highlight under-explored areas. For example, when planning a usability study for a mobile app, an AI tool might suggest investigating aspects of accessibility that we hadn’t initially considered. This can be particularly helpful in smaller teams, or teams of one, where researchers cannot get feedback from other team members.
- Research design and methodology: AI suggestions could potentially help refine our research questions and methodologies, offering alternative perspectives we hadn’t considered. This aligns with findings from other fields; for example, Tshitoyan et al. (2019) demonstrated how machine learning models could uncover scientific knowledge in materials science literature, suggesting new research directions.
- Data analysis: While not directly explored in Dashkevych and Portnov’s study, the potential for AI to assist in qualitative and quantitative data analysis is an exciting prospect for researchers dealing with large amounts of user data, such as user interview transcripts or product analytics. Certain popular tools are already incorporating AI features to help with coding and theme identification in qualitative data, and quantitative researchers have been using tools like ChatGPT’s Code Interpreter.
Even though AI has a lot of potential, the recent study by Dashkevych and Portnov also highlighted several limitations of current AI tools that we need to be aware of:
- Inconsistency and inaccuracy: The AI-generated responses were often inconsistent and sometimes contained inaccurate information. As UX researchers, we must verify AI-generated insights and apply critical thinking rather than accepting them blindly.
- Lack of context and nuance: AI tools often provided generic responses that weren’t always relevant to the specific research context. This aligns with findings from other studies on AI in research, such as Bender et al. (2021), who warn about the limitations of large language models in understanding context and nuance. In UX research, where understanding the subtle contextual factors that influence user behaviour is crucial, this limitation is particularly concerning.
- Reference and source issues: The study found that AI tools often generated non-existent or incorrect references. This is a significant issue for maintaining academic rigour in our research and could lead to the spread of misinformation if not carefully checked. This is one of the most frustrating issues I encounter whenever using AI: all models generate fake but realistic-looking references, or misuse real ones to reach conclusions not supported by the literature.
- Lack of creativity: While AI tools could summarise existing knowledge, they struggled with generating truly novel insights or creative solutions. This limitation is particularly relevant in UX research, where innovative problem-solving is often required to address unique user needs.
- Ethical considerations: The use of AI in research also raises ethical questions. For instance, how do we ensure the privacy and consent of users whose data might be analysed by AI systems? How do we address potential biases in AI algorithms that could influence research outcomes? These are critical questions that UX researchers must grapple with as we consider incorporating AI into our processes.
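The fabricated-reference problem above can at least be triaged mechanically before the manual check. Below is a hedged sketch: flag any reference that contains no DOI-shaped string, so those entries get verified by hand first. The `flag_missing_doi` helper and the sample references are mine (the second uses the DOI Foundation’s example prefix `10.1000`), and a present DOI of course proves nothing about whether the citation is used correctly.

```python
# Lightweight first-pass guard against fabricated references: surface entries
# with no DOI-shaped string so they are manually verified first. A DOI match
# is NOT verification; every AI-supplied reference still needs looking up.
import re

DOI_RE = re.compile(r"\b10\.\d{4,9}/\S+")

def flag_missing_doi(references: list[str]) -> list[str]:
    """Return the references that contain no DOI-shaped string."""
    return [ref for ref in references if not DOI_RE.search(ref)]

refs = [
    "Example et al. (2020). doi:10.1000/xyz123",   # made-up illustrative entry
    "Smith, J. (2023). Imaginary UX Journal, 12(3).",  # no DOI: check by hand
]
print(flag_missing_doi(refs))
```

A natural next step, not shown here, would be resolving each DOI against a registry such as Crossref, which still leaves the human step of reading the source.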
Given these promises and pitfalls, how should we approach the use of AI tools in our work? Here are some suggestions:
- AI as a complementary tool: Given their limitations, AI tools should be seen as complementary to human researchers, not as replacements. They can be valuable for tasks like initial literature reviews, data processing, and generating preliminary hypotheses. However, human expertise remains crucial for understanding context, empathising with users, and generating creative solutions. AI is just another tool in the researcher’s toolbox!
- Verification and critical thinking: I feel like I’m stating the obvious, but any information or insights generated by AI tools should be thoroughly checked and verified before being included in our work. Researchers should maintain a critical mindset and cross-reference AI-generated information with reputable sources. Don’t blindly trust the references or the information AI tools generate.
- Enhancing, not replacing, existing methods: AI tools have the potential to enhance our existing research methods. For example, they could help process large amounts of user feedback more quickly, allowing us to spend more time on deep analysis and insight generation. However, they cannot replace proven UX research methods like contextual inquiry or usability testing. As most UXers agree, AI tools cannot substitute for research with real users.
- Ethical use: If using AI tools in UX research, it’s important to be transparent about their use and to keep our user data secure. Certain tools use user-provided data to train their models, and tools themselves can be hacked, resulting in research data being leaked. Always check the terms and conditions of the tools you use and consult your company’s legal department before using an AI tool.
- Continuous learning: This write-up is accurate as of July 2024, but AI technologies evolve rapidly, so these strengths and limitations might look very different in a matter of months or years. As UX professionals, we need to stay informed about new developments and continuously reassess how new tools can be integrated into our work.
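On the data-security point above, one practical precaution is to strip obvious identifiers from transcripts before they ever reach a third-party tool. Here is a deliberately naive sketch; the patterns and the `redact` helper are mine, and real anonymisation needs far more than two regexes (names, addresses, and indirect identifiers slip straight through), so treat it as a starting point, not a guarantee.

```python
# Naive PII redaction for interview transcripts: mask email addresses and
# phone-like digit runs before text is shared with any external AI tool.
# This catches only the most obvious identifiers.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

print(redact("Reach me at jane.doe@example.com or 020 7946 0958."))
# Reach me at [EMAIL] or [PHONE].
```

Even with redaction in place, participants should still have consented to their data being processed by third-party services, and the tool’s own data-retention terms still apply.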
The integration of AI tools in UX research offers exciting possibilities for enhancing research-related tasks, but it also comes with significant challenges and ethical considerations. As UX professionals, we must approach these tools with a balance of curiosity and caution, always prioritising the human element in our research and design processes.
Something a bit different: Are you a coffee drinker? Do you drink decaf? I’d like to know your thoughts on decaf coffee. I’m doing a small study as a side passion project and I’d appreciate any help and shares. Link to the survey