Artificial intelligence is transforming healthcare, from diagnostics to treatment recommendations, promising efficiency, scalability, and improved patient experiences. However, when it comes to user experience (UX) in healthcare, we at Pocketworks are increasingly seeing AI treated as an infallible solution, and that worries us.
UX in healthcare is complex because patients, caregivers, and healthcare professionals interact unpredictably with digital tools.
To design products and services that genuinely meet the needs of all audiences, we have to gather robust data that captures a wide range of opinions and experiences.
The challenge is not just understanding what users do, but why they do it. Despite its sophistication, AI struggles with this critical aspect.
Take the example of one of our clients, Carbs & Cals, a diabetes management app.
As part of our ongoing product and roadmap development, we’ve recently been exploring new features that will encourage users to log and track their food intake.
We wanted to test how AI-generated predictions compare with research involving actual users. To do this, we built and trained a custom GPT on our key target audience profile.
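For illustration, here is the rough shape of that experiment as a minimal Python sketch using the OpenAI SDK. The model name, persona summary, and prompt wording are illustrative assumptions, not our actual setup.

```python
# A minimal sketch of a persona-prompted prediction, assuming the OpenAI
# Python SDK (pip install openai). The model name, persona text, and prompt
# are illustrative stand-ins, not the actual Carbs & Cals configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "You are simulating the core audience of a diabetes management app: "
    "adults who count carbohydrates to manage their condition."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": persona},
        {
            "role": "user",
            "content": (
                "What percentage of these users would actively engage with "
                "logging the food they have consumed? Answer with a single number."
            ),
        },
    ],
)

print(response.choices[0].message.content)  # the model's prediction
```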
We asked the AI model to predict what percentage of users would actively engage with logging the food they have consumed. The model predicted that 29% would, yet our product analytics show that only 18% of this core audience actually do.
That 11-percentage-point gap reveals how easily AI models can misread behavioural intent.
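For concreteness, the arithmetic behind that gap is trivial to reproduce; the two rates below are the figures quoted above.

```python
# Comparing the GPT's predicted engagement rate with the rate observed in
# our product analytics. The rates are the percentages quoted in the text.
predicted_rate = 0.29  # custom GPT's prediction
observed_rate = 0.18   # from product analytics

gap_points = (predicted_rate - observed_rate) * 100
relative_error = (predicted_rate - observed_rate) / observed_rate

print(f"Gap: {gap_points:.0f} percentage points")           # Gap: 11 percentage points
print(f"Overestimated engagement by {relative_error:.0%}")   # Overestimated engagement by 61%
```

In relative terms, the model overestimated engagement by more than 60%, far too large an error to base a roadmap decision on.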
AI is highly effective at recognising patterns in large datasets, making it invaluable for tasks such as processing medical records or predicting disease progression. However, its limitations become more apparent when it comes to human behaviour.
One of the core issues is the training data. Many AI models are built using historical datasets that fail to represent diverse populations. For example, women have been significantly underrepresented in clinical trials for decades. Research shows that only 33% of participants in cardiovascular trials are female, despite heart disease being a leading cause of death among women.
When AI models misinterpret user behaviour, they reinforce biases and drive flawed product decisions that could have serious consequences for patient care.
Gartner predicts that by 2026, organisations that operationalise AI transparency, trust, and security will see their AI models achieve a 50% improvement in adoption, business goals, and user acceptance.
This doesn’t mean AI has no place in UX research. It can be a valuable tool for drafting surveys, identifying common themes, and analysing large datasets. It speeds up the research process and can highlight patterns that might otherwise go unnoticed. However, AI should never replace direct user research.
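To make "identifying common themes" concrete, here is one minimal way to cluster open-ended survey responses with scikit-learn. The sample responses, the TF-IDF-plus-k-means approach, and the cluster count are illustrative assumptions, not a description of our actual pipeline.

```python
# A minimal sketch of theme discovery: cluster open-ended survey responses
# with TF-IDF vectors and k-means (pip install scikit-learn). The responses
# and cluster count are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "I forget to log meals when I'm busy",
    "Logging every meal feels like a chore",
    "The carb counts help me dose my insulin",
    "I like seeing my weekly carb trends",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for theme, text in zip(labels, responses):
    print(theme, text)
```

Even then, a human researcher still has to name the clusters, sanity-check them, and decide what they actually mean.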
In healthcare, user behaviour is often influenced by deeply personal factors that AI cannot predict. A person may avoid logging food because of guilt or anxiety around eating habits, not because they dislike the idea. AI is unable to interpret these emotions accurately.
AI also struggles with sentiment analysis, frequently misreading sarcasm. A user might say, “This app is great at suggesting the wrong foods”, and an AI might register that as positive feedback (this actually happened to us when evaluating the sentiment of an app review). A human researcher would instantly recognise the sarcasm.
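As an illustration of how this happens, here is a minimal sketch using the open-source VADER analyser. VADER is an illustrative stand-in for lexicon-based sentiment tools in general, not the tool we were using at the time.

```python
# A minimal sketch of lexicon-based sentiment scoring, assuming the
# vaderSentiment package (pip install vaderSentiment). VADER is a stand-in
# for lexicon-based tools generally, not the tool from our review analysis.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

review = "This app is great at suggesting the wrong foods"
scores = analyzer.polarity_scores(review)

# Word-level lexicons weigh "great" heavily, so sarcastic reviews like this
# tend to come out with a positive compound score.
print(scores)  # dict of neg/neu/pos/compound scores
```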
Flawed AI models also present ethical and sustainability concerns. A product built on incorrect assumptions will eventually require rework. That means collecting new data, retraining models, and revising entire product strategies. The financial and environmental impact of this is concerning.
Training large AI models consumes enormous amounts of energy; one widely cited study estimated that training a single large model can emit as much carbon as five cars over their entire lifetimes. Rebuilding models due to bias or flaws multiplies those costs, making AI inefficiency both a sustainability issue and an ethical one.
Responsible AI development minimises waste and ensures effective technology from the start.
Effective healthcare UX requires AI and human insight to work together, with AI as a tool, not a decision-maker. The guiding principle for AI-supported UX research is simple: let AI accelerate the work, but validate its outputs against real user behaviour before acting on them.
AI is a game-changer in healthcare, but it’s not a silver bullet. It has the potential to streamline research, identify trends, and process vast amounts of data. But it cannot replace real user insights. When AI predictions misalign with actual behaviour, the consequences go beyond engagement metrics. They affect patient outcomes.
The best approach is a balanced one. AI should enhance, not replace, human research. By combining AI’s efficiency with human expertise, we can design healthcare experiences that are not only smart but truly user-centred.
Technology is advancing rapidly, but human behaviour remains as complex as ever. The real challenge in healthcare UX is building AI-driven solutions that align with real needs. By recognising AI’s limitations and using it responsibly, we can create digital health products that are effective, ethical and sustainable.
Want to learn more about Digital Transformation and the impact of AI in Healthcare? Visit pocketworks.co.uk or drop us a line at hello@pocketworks.co.uk.