Artificial Intelligence
May 2025

5 Dangerous Myths About Using AI in Healthcare UX

Written by Create Ape

In healthcare, design doesn’t happen in a vacuum, and neither does AI. When artificial intelligence is introduced into clinical tools, it becomes part of the care environment. It influences how data is interpreted, how decisions are made, and how risks are managed. It becomes the silent participant in critical moments: sometimes enhancing care, sometimes distorting it.

Yet too often, teams approach AI as a feature to be layered in, not as an experience to be designed. This mindset creates blind spots: assumptions that the tech will simply make things “smarter,” that clinicians will adapt without friction, or that delaying AI adoption is the safest path. These assumptions don’t just stall innovation; they can compromise care, increase liability, and erode trust in your product.

What follows are five of the most persistent myths that lead MedTech teams down these dangerous paths and what the most forward-thinking teams are doing instead to design safe, adaptive, and trustworthy healthcare experiences.

Myth #1: “AI Will Make the Interface Smarter On Its Own”

Reality Check: AI amplifies patterns; it doesn’t fix them.

There’s a widespread misconception that simply adding AI to a product will make the experience “smarter.” What AI actually does is amplify what already exists, good or bad. If your UX lacks clarity, logic, or alignment with clinical workflows, AI will only make those problems more opaque and more dangerous.

Intelligence isn’t in the model; it’s in how that model is embedded into the experience: how its suggestions are surfaced, explained, and acted on under real conditions. Without that intentional scaffolding, AI becomes a source of friction rather than support.

Real-World Example: Babylon Health’s AI Chatbot Misfires

Babylon Health, a UK-based digital health provider, launched an AI chatbot to triage patient symptoms. The intent was to reduce burden on providers and help users self-assess health risks. But the results raised major red flags. Reports emerged that the chatbot downplayed serious symptoms like chest pain, suggesting panic attacks instead of possible heart attacks.

The problem wasn’t just in the model’s training; it was in the UX. Users were not given transparency into how the AI reached its conclusions, nor were they guided through clear, contextual follow-up steps. The tool’s limitations weren’t surfaced; its “intelligence” was presented as trustworthy when it wasn’t ready for the complexity of clinical triage.

Regulatory fallout followed. The UK’s Care Quality Commission investigated. Public trust plummeted. Babylon ultimately pivoted its business and faced criticism that could have been mitigated with better UX design, clearer safeguards, and human-in-the-loop support.

Lessons learned: Why UX Still Leads

  1. AI Magnifies, It Doesn’t Mend: AI enhances what’s already present in your UX, whether that’s friction or flow. If your design logic is clunky or your interface is vague, AI won’t clarify it… it’ll compound the problem at scale.
  2. Explainability Is Design, Not Just Tech: A clinician doesn’t need to see code. But they do need intuitive explanations, visible logic, and clear next steps, embedded visually and contextually. That’s a UX job, not just a modeling one.
  3. Design for Uncertainty, Not Just Output: In clinical settings, even accurate AI suggestions can backfire if users feel uncertain about when (or whether) to trust them. Your UX must guide users through that uncertainty with clarity and choice.

Smart Strategies: Embedding Intelligence with Intent

  1. Scaffold the AI, Don’t Just Surface It: Show users what the AI is doing behind the curtain: highlight relevant inputs, allow for overrides, and provide fallback options when confidence is low.
  2. Design “Pause Points” into Workflows: When a clinician sees an AI suggestion, what can they do next? Integrate action paths like “flag for review,” “override,” or “view rationale” to prevent silent errors (see the sketch after this list).
  3. Align AI Moments with Clinical Judgment Moments: Map out where AI decisions appear during tasks and whether that moment supports or distracts from critical thinking. Aligning the two is where real safety lives.
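
To make the first two strategies concrete, here’s a minimal sketch of what confidence gating and pause-point logic might look like in code. Every name, type, and threshold below is hypothetical, chosen to illustrate the pattern rather than prescribe an implementation:

```typescript
// Hypothetical sketch: gate how an AI suggestion is surfaced based on model
// confidence, and keep explicit action paths open instead of silent acceptance.

type SuggestionAction = "accept" | "override" | "flagForReview" | "viewRationale";

interface AiSuggestion {
  text: string;         // e.g. "Possible cardiac event: recommend ECG"
  confidence: number;   // 0..1, as reported by the model
  inputsUsed: string[]; // which patient data points informed the suggestion
}

interface Presentation {
  mode: "primary" | "advisory" | "suppressed";
  actions: SuggestionAction[];
}

// The thresholds below are illustrative; real cutoffs would come from
// clinical validation, not from a UX sketch.
function presentSuggestion(s: AiSuggestion): Presentation {
  if (s.confidence < 0.5) {
    // Fallback: weak suggestions are not shown as clinical guidance at all.
    return { mode: "suppressed", actions: ["flagForReview"] };
  }
  if (s.confidence < 0.8) {
    // Advisory: surfaced with rationale and an explicit pause point.
    return { mode: "advisory", actions: ["viewRationale", "override", "flagForReview"] };
  }
  // Even high-confidence output keeps the override path open.
  return { mode: "primary", actions: ["accept", "viewRationale", "override"] };
}
```

The point isn’t the specific thresholds; it’s that every confidence band maps to a designed action path, so the clinician is never handed a suggestion with no next step.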
[Illustration: a distressed woman looks at her phone as a chatbot message reads “It’s probably just anxiety, take a deep breath,” while an EKG behind her shows alarming results. Caption: When AI talks, but no one understands.]

Myth #2: “We Don’t Need AI Yet”

Reality Check: Delaying AI Integration Increases Risk and Hinders Progress

In the rapidly evolving landscape of healthcare technology, postponing the integration of Artificial Intelligence into user experience design is a strategic misstep. While caution is essential in healthcare, excessive delay can lead to missed opportunities, increased operational inefficiencies, and diminished patient care quality.

Consequences of Delayed AI Integration:

  • Operational Inefficiencies: Without AI, healthcare systems may continue to rely on manual processes, leading to slower decision-making and increased workload for clinicians.
  • Competitive Disadvantage: As competitors adopt AI to enhance diagnostics, personalize patient care, and streamline operations, organizations that delay may find themselves lagging in innovation and market relevance.
  • Regulatory Challenges: Regulatory bodies are increasingly recognizing AI as a component of standard care. Delayed adoption may result in non-compliance with emerging standards and expectations.
  • Patient Dissatisfaction: Modern patients expect timely, personalized, and efficient care. Failure to integrate AI can lead to subpar experiences and lower patient satisfaction.

Real-World Example: Cera’s Proactive AI Integration

Cera, a UK-based healthcare company, exemplifies the benefits of early AI adoption. By integrating AI into their care services, Cera developed tools like the Hospitalisation Predict-Prevent system, which forecasts 80% of health risks in advance, reducing hospitalizations by up to 70%. Additionally, their Falls Prevention AI predicts 83% of falls in advance, decreasing patient falls by 20%. These proactive measures not only improved patient outcomes but also alleviated pressure on healthcare providers and systems.  

Lessons Learned: Why Waiting Is the Bigger Risk

  1. Inaction Is a Liability, Not a Shield: In MedTech, not deploying AI doesn’t preserve safety; it delays improvements, multiplies manual errors, and invites competitors to outpace you in outcomes and compliance readiness.
  2. Regulators Are Moving, So Should You: Emerging regulatory frameworks increasingly expect explainable, adaptive systems. Teams that wait may find themselves forced to retrofit AI under pressure, rather than proactively shaping it with safety and clarity.
  3. Patient Expectations Have Already Shifted: Today’s patients expect personalization, precision, and speed. Delaying AI adoption means delivering outdated experiences and undermining the credibility of your product in a digitally native market.

Smart Strategies: Building AI Readiness Without Burning Out

  1. Start with Low-Risk, High-Return Pilots: Deploy AI in areas like administrative triage or documentation assistance, where success is measurable and risks are low. Use that data to build internal trust and scale wisely.
  2. Tie AI Integration to Measurable Clinical Outcomes: Don’t chase novelty; tie every implementation to a patient or provider benefit (e.g., reduced time-to-diagnosis, decreased burnout, faster documentation turnaround).
  3. Build Regulatory Foresight Into Your Roadmap: Track what bodies like the FDA, EMA, or MHRA are signaling about AI. Early compliance alignment is a strategic differentiator when scaling internationally.
[Illustration: a doctor stands in front of two interfaces, one cluttered with alert signs and one ideal; his facial expression changes in front of each.]

Myth #3: “We Can Just Plug in a Model”

In the realm of healthcare technology, there’s a prevailing misconception that integrating an AI model into existing systems is a straightforward process. This belief overlooks the complexities of clinical workflows, the nuances of user experience (UX) design, and the critical importance of aligning AI outputs with real-world medical practices. Simply “plugging in” an AI model without thoughtful integration can lead to confusion, mistrust, and underutilization of the technology.

Case Study: IBM Watson for Oncology

IBM Watson for Oncology was heralded as a revolutionary AI system capable of providing evidence-based treatment recommendations for cancer patients. Trained using data from Memorial Sloan Kettering Cancer Center, Watson aimed to assist oncologists by analyzing vast amounts of medical literature and patient data. However, the system faced significant challenges that hindered its effectiveness and adoption.

Key Issues Identified:

  • Limited Training Data Diversity: Watson’s knowledge base was heavily influenced by practices at a single institution, leading to recommendations that didn’t align with diverse global medical guidelines. 
  • Inadequate UX Design: Clinicians reported that Watson’s interface was not user-friendly and often disrupted their workflow, indicating a lack of end-user involvement during development.  
  • Transparency Concerns: The system’s decision-making process was opaque, making it difficult for users to trust its recommendations.  
  • Over-reliance on Curated Data: Watson primarily relied on pre-fed guidelines and lacked the ability to learn dynamically from real-world patient cases.  

These challenges culminated in the system providing treatment recommendations that were sometimes inconsistent with established medical practices, leading to skepticism among healthcare professionals and eventual scaling back of the project.

Lessons Learned: Why AI Needs UX to Function

  1. Plug-and-Play Doesn’t Work in Clinical Environments: AI tools can’t just “slot in.” They must be shaped around complex workflows, user roles, and clinical decision-making processes. Otherwise, they confuse more than they clarify.
  2. Trust Requires UX, Not Just Accuracy: Even the most accurate AI won’t be used if clinicians don’t trust it. That trust is earned through frictionless workflows, visible logic, and user agency, elements only UX can deliver.
  3. Context Is the Missing Layer: AI without UX context becomes noise. A diagnosis suggestion without urgency indicators, confidence scores, or patient-specific nuance isn’t helpful; it’s a guessing game wrapped in math.

Smart Strategies: Operationalizing Intelligence Thoughtfully

  1. Map Model Inputs to UX Touchpoints: Design the interface to reflect what the model sees and uses, especially when confidence is low or data is incomplete. Let users peek under the hood without being overwhelmed.
  2. Create Clear Escalation and Override Paths: UX should make it obvious how to accept, question, or bypass AI suggestions (see the sketch after this list). No dead ends, no assumptions. Just clinical freedom, clearly designed.
  3. Design for Clinical Variation, Not Uniformity: Build interfaces that can flex across specialties, contexts, or locations. A rigid “universal” model is more likely to fail than a modular UX that adapts intelligently.
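
One way to picture this is as a contract between the model layer and the interface. Here’s a minimal sketch of that contract; every type and field name is hypothetical, chosen to illustrate the pattern rather than any particular product:

```typescript
// Hypothetical sketch: the payload a clinical UI could require from the model
// layer, so the interface can show what the AI saw, what it missed, and how
// the clinician can respond. All names are illustrative.

interface ModelContext {
  inputsUsed: { field: string; value: string }[]; // data the model actually consumed
  inputsMissing: string[];                        // gaps the clinician should know about
  confidence: number;                             // 0..1
  specialty: string;                              // e.g. "oncology"; lets the UI flex per context
}

interface EscalationPaths {
  onOverride: (reason: string) => void; // clinician bypasses the suggestion, with a reason
  onQuestion: () => void;               // request rationale or supporting evidence
  onEscalate: () => void;               // route to senior review; never a dead end
}

// Rendering rule: incomplete inputs are surfaced, not hidden.
function contextBanner(ctx: ModelContext): string {
  if (ctx.inputsMissing.length > 0) {
    return `Suggestion based on partial data (missing: ${ctx.inputsMissing.join(", ")})`;
  }
  const pct = Math.round(ctx.confidence * 100);
  return `Suggestion based on ${ctx.inputsUsed.length} data points (confidence ${pct}%)`;
}
```

Forcing the model layer to hand over inputsMissing alongside its answer is the design choice doing the work here: the interface can’t hide incomplete data, and the override path is part of the contract rather than an afterthought.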

Myth #4: “AI Doesn’t Belong in UX”

The traditional view that UX pertains solely to visual design and user interfaces is outdated. Modern UX encompasses the entire user journey, including how information is presented, how decisions are supported, and how users interact with complex systems. AI plays a pivotal role in enhancing these aspects by providing personalized, efficient, and intelligent interactions that are crucial in healthcare settings.

Case Study: Moorfields Eye Hospital and DeepMind’s AI Collaboration

Moorfields Eye Hospital, in partnership with DeepMind, developed an AI system capable of analyzing 3D retinal scans to detect over 50 eye diseases. This collaboration exemplifies the successful integration of AI into healthcare UX:

  • Enhanced Diagnostic Accuracy: The AI system matched world-leading expert performance in diagnosing eye conditions, ensuring patients receive accurate assessments. 
  • Improved Workflow Efficiency: By rapidly analyzing scans, the system reduced the time clinicians spent on diagnosis, allowing them to focus more on patient care.
  • Transparent Decision-Making: The AI provided visual representations of its analysis, enabling clinicians to understand and trust its recommendations.

This case demonstrates that AI, when thoughtfully integrated into UX design, can significantly improve clinical outcomes and user satisfaction.

Lessons Learned: Why UX Is the Delivery Mechanism for AI

  1. AI Shapes Experience… Whether You Design It or Not: Once AI enters the product, it is the experience. It influences what users see, when they see it, and how they act. Ignoring that doesn’t neutralize the effect; it just cedes control.
  2. Static Interfaces Can’t Support Dynamic Intelligence: A rigid UI wrapped around an adaptive AI creates friction. When users don’t see the system adapting to context, behavior, or risk level, they tune it out, or worse, stop trusting it.
  3. Design Is the Interface to Intelligence: If AI is the engine, UX is the steering wheel. The only way to safely operationalize AI is to embed it in workflows, design for judgment calls, and create space for nuance.

Smart Strategies: Merging Design and Intelligence

  1. Think Beyond Screens, Design for State Shifts: What happens when a patient’s vitals spike? When risk models shift mid-task? UX must adapt dynamically, surfacing alerts or changing workflows to reflect AI signals in real time.
  2. Build Interfaces That Communicate AI’s Role: Make it obvious when the system is learning, guiding, or adapting. Use visual cues, progressive disclosures, and tone to position AI as a collaborator, not a black box.
  3. Design Feedback Loops Into the Experience: Let users teach the system. Give clinicians the option to flag false positives, adjust suggestions, or report mismatches, closing the loop between human judgment and machine learning (see the sketch after this list).
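
As one way to picture that feedback loop, here’s a minimal sketch of a structured feedback event. The event names, fields, and batching approach are assumptions for illustration, not a reference design:

```typescript
// Hypothetical sketch: structured feedback that lets clinicians teach the
// system, closing the loop between human judgment and the model. All names
// are illustrative.

type FeedbackKind = "falsePositive" | "adjustedSuggestion" | "contextMismatch";

interface ClinicianFeedback {
  suggestionId: string;
  kind: FeedbackKind;
  correctedValue?: string; // what the clinician chose instead, if anything
  note?: string;           // free-text rationale, useful for audit in regulated settings
  timestamp: string;       // ISO 8601, so every correction is traceable
}

// Queue feedback locally and flush in batches, so reporting never blocks care.
const pending: ClinicianFeedback[] = [];

function recordFeedback(fb: ClinicianFeedback): void {
  pending.push(fb);
}

async function flushFeedback(
  send: (batch: ClinicianFeedback[]) => Promise<void>
): Promise<void> {
  const batch = pending.splice(0, pending.length); // drain the queue
  if (batch.length > 0) {
    await send(batch);
  }
}
```

The design choice worth noting is the one-tap structure: a clinician flags a mismatch in a single action and moves on, while the system still captures enough context for the model and audit teams to act on it later.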

Myth #5: “If It Works for Consumers, It’ll Work for Clinicians”

In the realm of digital product design, consumer applications often prioritize engagement, simplicity, and user delight. Features like infinite scroll, personalized recommendations, and gamification are commonplace. However, applying these consumer-centric design principles directly to clinical environments can be misguided.

Clinicians operate in high-stakes settings where efficiency, accuracy, and clarity are paramount. Their interactions with digital tools are task-oriented, time-sensitive, and often involve complex decision-making processes. Therefore, designing for clinicians requires a fundamentally different approach that acknowledges their unique workflows, cognitive load, and the critical nature of their tasks.

Case Study: Ambient AI in Clinical Documentation

Kaiser Permanente, a leading healthcare organization, implemented ambient AI technology to assist clinicians with documentation during patient encounters. This AI system listens to conversations between clinicians and patients (with consent) and automatically generates detailed medical notes. The integration of this technology led to significant improvements: 

  • Reduced Administrative Burden: Clinicians experienced a 30% reduction in after-hours documentation, allowing more time for direct patient care.
  • Enhanced Patient Engagement: With less focus on note-taking, clinicians could maintain better eye contact and communication with patients, improving the overall experience.
  • Improved Documentation Quality: The AI-generated notes were more comprehensive and accurate, capturing nuances that might be missed during manual documentation.

This case illustrates the importance of designing AI tools that align with clinical workflows and support clinicians in their tasks without adding complexity or cognitive load.  

Lessons Learned: Why Clinical UX Has Its Own Playbook

  1. Clinicians Work in Interrupt-Heavy, Risk-Rich Environments: Unlike consumers, clinicians don’t browse; they triage, interpret, and act under pressure. Interfaces must strip away friction and uncertainty, not add delight for its own sake.
  2. Precision Beats Engagement: A clean interface that supports accuracy and speed is more valuable than one that feels “sticky.” Confetti doesn’t save lives; well-placed information and reliable interactions do.
  3. Borrowing from B2C Can Create Clinical Debt: When you import consumer design tropes (modals, infinite scroll, playful nudges), you risk cognitive overload, alert fatigue, and user mistrust in high-stakes settings.

Smart Strategies: Building for Clinician Reality

  1. Design for Mental Models, Not Just Visual Simplicity: Align your interface with how clinicians think: prioritize data that informs action, cluster related steps, and use hierarchy to signal urgency.
  2. Pressure-Test UX Under Clinical Conditions: Test workflows during simulated emergencies, team handoffs, or multitasking scenarios. If your interface can’t hold up under strain, it’s not ready.
  3. Respect the Floor, Not Just the Ceiling: Your design should work for the exhausted nurse at hour 10, not just the tech-savvy physician on a calm shift. Build for consistency, not just edge-case elegance.

Consequence: When you borrow consumer design logic for regulated environments, you get alert fatigue, interface avoidance, and critical errors that no one traces back to design.

Good UX Is the Gatekeeper of Good AI

AI is poised to transform healthcare, but this transformation requires deliberate integration. It’s not about adding models at the end of development or assuming consumer design principles will suffice for clinicians. True innovation occurs when UX and AI evolve together. The myths we’ve explored stem from a common misconception: separating intelligence from experience, strategy from usability, or speed from safety. In healthcare, these false separations introduce risk, leading to products that stall, confuse, or fail at the point of care.

Leading MedTech teams recognize this. They’re not merely adopting AI; they’re designing with it. They’re creating UX that doesn’t just present data but knows when to guide, when to step back, and when to learn.

It begins with a single interface… a moment where the experience aligns seamlessly for the clinician, the patient, and the workflow. That’s where thoughtful design meets meaningful impact.

What could that moment look like in your product?