Artificial Intelligence
––
May 2025

Smarter UX With AI: Trials, Portals, Devices

Written by
Create Ape

I. AI-First UX Isn’t Hype, It’s the Quiet Healthcare Revolution

AI-first UX strategies aren’t just a trend, they’re quietly driving measurable progress in healthcare. While flashy AI headlines often focus on large language models or robotic surgery, the real transformation is happening at the intersection of design and data: where smart interfaces shape better care experiences.

Across clinical trials, patient portals, and medical devices, product teams are embedding AI into their UX frameworks, not as a bolt-on feature, but as an integrated strategy to solve complex problems. They’re using behavioral data to reduce drop-off during trial enrollment. They’re tailoring interfaces to the needs of caregivers versus patients. They’re even adapting device interfaces in real time based on how, when, and where people use them.

In this post, we’ll break down three real-world examples of AI-first UX strategies, each solving a different, high-stakes challenge in healthcare, and extract key patterns that forward-thinking product teams can use to guide their own innovation roadmap.

II. Clinical Trials: AI-Enhanced Enrollment at UPMC

The Problem:

Clinical trials have long struggled with sluggish enrollment and high dropout rates, especially among underrepresented populations. These challenges stem from a mix of systemic and UX issues: complex medical jargon, impersonal intake flows, and limited adaptability during the screening process. In traditional trials, once a participant begins the enrollment process, there’s little room for adjustment if confusion arises or hesitancy is detected. That rigidity leads to disengagement and, ultimately, abandonment.

The Solution:

The University of Pittsburgh Medical Center (UPMC) became a key site in the REMAP-COVID trial, an international adaptive platform that used artificial intelligence to speed up and refine the clinical trial process during the COVID-19 crisis. This wasn’t just a shift in tech, it was a reimagining of trial flow logic.

Instead of relying on static protocols, the AI system processed patient data in real time and dynamically adjusted the course of the trial: modifying treatment allocations, rebalancing randomization arms, and fine-tuning participant intake based on evolving evidence. This adaptability extended to the UX level as well: trial participants didn’t experience the usual wall of questions or dense medical terms. Instead, the process became more responsive, shortening or extending based on behavioral signals and contextual input. This made the experience more human, especially for participants with lower health literacy or digital familiarity.
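To make "rebalancing randomization arms" concrete: response-adaptive platforms often use Bayesian allocation, where each new participant is more likely to be routed to an arm that appears to be working. The sketch below is a minimal Thompson-sampling illustration with hypothetical arm names and counts, not UPMC's or REMAP-COVID's actual implementation.

```python
import random

def choose_arm(arms):
    """Pick the arm with the highest sampled success probability.

    arms: dict mapping arm name -> (successes, failures) observed so far.
    Sampling from each arm's Beta posterior naturally shifts new
    participants toward treatments that appear to be working, while
    still occasionally exploring the weaker arm.
    """
    draws = {
        name: random.betavariate(s + 1, f + 1)  # Beta(1, 1) prior
        for name, (s, f) in arms.items()
    }
    return max(draws, key=draws.get)

# Hypothetical arms with observed outcomes (successes, failures).
arms = {"standard_care": (40, 60), "candidate_drug": (55, 45)}

random.seed(0)
counts = {name: 0 for name in arms}
for _ in range(1000):
    counts[choose_arm(arms)] += 1
# The better-performing arm receives the large majority of allocations.
```

In a real platform trial the success/failure counts would be updated as outcomes arrive, so the allocation probabilities drift with the evidence; here they are frozen only to keep the sketch short.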

The Result:

The REMAP-COVID approach helped UPMC and its global partners accelerate enrollment and treatment evaluation at a time when speed was not just important, it was life-saving. The AI-driven model also provided a foundation for reducing drop-off rates by aligning trial procedures with real-world participant behavior. This trial wasn’t designed for the “ideal user” on paper, it adapted to the complex, imperfect, and often stressed real-life user, especially during a pandemic. More broadly, the REMAP-COVID model demonstrated how AI-first design strategies can bridge the gap between clinical efficiency and user-centered care. Instead of forcing patients to adapt to the system, the system adapted to them, bringing enrollment equity and accessibility into focus.

III. Patient Portals: Personalizing Education with PaniniQA

The Problem:

Digital patient portals have become standard in healthcare, but too often, they fail to make an impact. Most deliver generic documents, unengaging dashboards, and templated discharge summaries that leave patients confused about their next steps. This disconnect isn’t just a design flaw, it’s a clinical risk. When patients don’t understand instructions or forget what they read, the result is poor adherence, avoidable readmissions, and increased burdens on care teams. The UX is passive when it should be proactive.

The Solution:

PaniniQA represents a leap forward in transforming post-discharge communication. Developed by researchers and evaluated in a clinical setting, PaniniQA is an AI-powered question-answering tool that restructures how patients engage with their discharge instructions. Instead of reading static text, patients interact with the material, answering personalized questions that help them reflect on key information, identify gaps in understanding, and receive real-time corrective feedback. Think of it as a mini “teach-back” system embedded into the portal experience. PaniniQA dynamically parses discharge content, identifies crucial clinical concepts, and then builds a conversational layer around them. The goal isn’t just comprehension, it’s retention and engagement, two variables that traditional EHR interfaces almost entirely neglect.
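The teach-back loop described above, ask, check, and correct in real time, can be illustrated with a toy question item. This is a loose sketch of the pattern, not PaniniQA's actual code; the medication example and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TeachBackItem:
    question: str   # generated from a key concept in the discharge note
    answer: str     # the expected patient response
    feedback: str   # corrective/reinforcing explanation shown afterward

def check_response(item: TeachBackItem, patient_answer: str):
    """Return (correct, message): reinforcement on a hit, corrective
    feedback on a miss -- the core of a teach-back feedback loop."""
    if patient_answer.strip().lower() == item.answer.lower():
        return True, "Correct! " + item.feedback
    return False, "Not quite. " + item.feedback

# Hypothetical item derived from a discharge instruction.
item = TeachBackItem(
    question="How often should you take your blood pressure medication?",
    answer="once a day",
    feedback="Your discharge note says lisinopril is taken once daily, in the morning.",
)
ok, msg = check_response(item, "Once a Day")
```

The real system does far more (it parses the discharge summary to extract the clinical concepts and generates the questions automatically), but the UX payoff lives in this loop: the patient answers, the system checks, and feedback arrives immediately.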

The Result:

Evaluations of PaniniQA showed a clear improvement in patient knowledge retention and adherence to medical instructions. Patients weren’t just passively consuming information, they were tested on it, corrected in real time, and supported through a feedback loop that mimics how a nurse or caregiver might follow up in person. Importantly, this experience built confidence and accountability without adding friction.

This case highlights the real power of AI-first UX: it doesn’t overwhelm the user with information. It scaffolds understanding. And in doing so, it helps build patient self-efficacy, one of the most powerful predictors of long-term health behavior change.

[Illustration: a figure atop a pedestal orchestrates the members of a UX team so that the AI works properly.]

IV. Connected Devices: Context-Aware Interfaces by Empatica

The Problem:

Connected medical devices are often designed with the assumption that users interact with them in controlled environments: sitting still, paying attention, following protocol. But real-world conditions are far messier. People wear these devices while exercising, working, sleeping, and caregiving. The result? A static interface doesn’t cut it. When a device’s user flow doesn’t adapt to context, when it treats every input as equal regardless of timing, movement, or urgency, it invites mistakes. This leads to missed signals, error-prone alerts, and disengaged users who lose trust in the system.

The Solution:

Empatica, a pioneer in AI-powered wearables, flipped the design script. Instead of assuming stability, their devices embrace variability. The Embrace2 and EmbracePlus devices are FDA-cleared for seizure monitoring and built to dynamically respond to environmental and physiological conditions. They continuously track data like skin temperature, electrodermal activity, motion, and circadian patterns. But where they stand out is in their adaptive interface behavior: notifications, alerts, and data streams are intelligently filtered and modulated depending on when, how, and why a user is engaging with the device.

This is AI-first UX in action, not just gathering data, but translating it into contextually relevant outputs. At night, when the user is asleep, the device reduces unnecessary noise. When it detects heightened stress or seizure risk, it elevates urgency. The goal isn’t more alerts, it’s smarter ones.
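The "smarter alerts" idea above boils down to a small piece of decision logic: severity is modulated by context rather than passed through unchanged. The sketch below uses illustrative thresholds and field names, not Empatica's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Context:
    asleep: bool
    seizure_risk: float  # 0.0-1.0, e.g. from an on-device model

def alert_level(context: Context, base_severity: str) -> str:
    """Modulate an alert by context: suppress low-priority noise during
    sleep, but always escalate genuine risk. (Illustrative thresholds.)"""
    if context.seizure_risk >= 0.8:
        return "urgent"      # elevated risk overrides everything else
    if context.asleep and base_severity == "info":
        return "silent"      # hold routine info until the user wakes
    return base_severity     # otherwise pass the alert through as-is
```

The point is not the thresholds but the shape of the function: every notification passes through context before it reaches the user, which is exactly what a static interface never does.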

The Result:

Empatica’s wearables are now used in both clinical trials and everyday health monitoring across the globe. They’re trusted not because they do more, but because they do less, better: surfacing what matters when it matters most. By reducing cognitive overload and adapting UI/UX flows to real-world use, these devices have demonstrated a measurable drop in error rates and higher sustained engagement from users and clinicians alike. This isn’t just interface design, it’s experience design built on contextual intelligence. Empatica shows that medical UX doesn’t need to be rigid to be reliable. In fact, the opposite is true: flexibility, driven by AI, is what enables safety, clarity, and trust.

V. Patterns Across All Three Use Cases: How AI-First UX Redefines Product Strategy

Across clinical trials, patient portals, and connected medical devices, one thing is clear: AI-first UX is not a feature, it’s a strategic design choice that’s reshaping how healthcare products work, evolve, and earn trust. The three case studies (UPMC’s adaptive trial model, PaniniQA’s educational layer, and Empatica’s wearable interfaces) share more than just technical sophistication. They embody a new design paradigm built around context, collaboration, and continuous adaptation.

Let’s break down what these teams got right, and what your team can take away.

1. AI Was Integrated Early, Not Layered On Late

In each case, AI wasn’t used as a last-mile optimization. It was built into the user journey from the start. UPMC’s trial logic was AI-powered from enrollment to randomization. PaniniQA’s question engine was designed to be the patient interface. Empatica’s wearables didn’t just collect data, they responded to it dynamically.

Why it matters: When AI is integrated early, UX becomes predictive and adaptive, not reactive. You build a product that learns with the user, not just delivers static content. That means better engagement, better outcomes, and less rework later.

For UX leaders: Stop thinking of AI as a bolt-on. Design your interfaces with machine learning in mind from the very first wireframe.

2. Real-Time Context Drove UX Decisions

All three products adjusted based on when, where, and how they were being used. In trials, AI responded to participant hesitation. In portals, it adapted to comprehension gaps. In wearables, it shifted output based on sleep cycles or stress signals. This is UX that responds, not just performs.

Why it matters: In dynamic environments like healthcare, static flows can’t keep up. Context-aware interfaces reduce friction, improve clarity, and support safer decision-making.

For product managers: If your interface acts the same at 2 a.m. as it does at 2 p.m., you’re ignoring how humans actually use technology.

3. Trust Was Built Through Personalization, Not Just Functionality

In all three use cases, personalization wasn’t just about aesthetics. It shaped how patients were guided, how much control they felt they had, and whether they trusted the product enough to keep using it. This is especially critical in health, where uncertainty and vulnerability are baked into the user experience.

Why it matters: People don’t just need accurate tools, they need to feel safe using them. Adaptive experiences help users feel seen and supported, which increases adherence and satisfaction.

For CMOs: Personalization isn’t a nice-to-have, it’s a revenue and retention lever in an industry where churn can have clinical consequences.

4. Compliance Was a Co-Designer, Not a Roadblock

Every example here worked within (or was certified by) regulatory frameworks: REMAP-COVID was globally aligned, PaniniQA was academically evaluated, and Empatica’s devices were FDA-cleared. These weren’t innovation experiments waiting for retroactive approval, they were designed with compliance in mind.

Why it matters: In regulated spaces, time-to-market isn’t just about speed, it’s about safety and credibility. Early collaboration with compliance protects innovation from being gutted during review.

For enterprise teams: Bring compliance into your sprint cycles, not just your audits. Great UX isn’t risky if it’s architected with guardrails.

5. The North Star Was Outcomes, Not Features

What unified these teams wasn’t a tech stack, it was intent. They weren’t building to showcase AI. They were solving for enrollment efficiency, patient comprehension, and usability under pressure. Every design decision served those goals.

Why it matters: Too often, AI projects drift into experimentation mode. But in healthcare, the stakes are too high for shiny-object syndrome. AI-first UX must deliver outcomes that matter: to patients, clinicians, and business stakeholders alike.

For executive sponsors: Set your metrics early. Make AI serve the mission, not the menu of possibilities.

[Illustration: a comparison between a man and a woman using two different interfaces. On the left, a happy man works with an electronic pen on a digital pad, an engine working under the desk to represent harmony. On the right, a serious-faced woman looks at an interface while working on physical paper.]

VI. AI-First UX Isn’t Just Smart, It’s Necessary

From flexible trials to intelligent education and adaptive interfaces, these examples show how AI-first UX is already delivering impact in regulated healthcare settings. Not by making products more complicated, but by making them more human. In a space where safety, clarity, and trust are everything, AI is no longer optional, it’s the only path forward for scalable, usable, and future-proof healthcare design.

If your product isn’t learning from users, it’s falling behind. Design smarter. Move faster. Build trust at every interaction.

The next breakthrough in healthcare won’t come from new features, it’ll come from better experiences. Start there.

Our editorial team ensures all content meets the highest standards for accuracy and clarity. This article has been reviewed by multiple specialists.


Last updated:
June 11, 2025