Conversational Health Interfaces @CUI2025

This workshop aims to foster a dialogue exploring these challenges and opportunities, discussing how to maximize the beneficial impacts of LLM-powered CUIs on health and wellbeing while managing and mitigating the associated risks. Participants will examine themes such as privacy-centric conversational interventions, proactive and adaptive strategies, and user agency, all of which are pivotal in designing CUIs that are not only effective but also ethically sound and user-friendly. These discussions will help develop frameworks and collaborations to guide the ethical development and practical application of CUIs in health and wellness, ensuring they enhance rather than compromise user wellbeing.


Workshop Themes

  • Structuring Conversations Design conversational interactions that respect situational context.
  • Privacy and Trust Concerns Address privacy issues in CUIs, focusing on discreet and secure interactions.
  • User Groups and Personalization Tailor CUIs to diverse user groups, enhancing personalization and relevance.
  • Proactive Interventions Design CUIs that proactively support health and wellness based on user behavior.
  • User Agency Enhance user control and agency in interactions with CUIs.


Call for Participation

We invite researchers, practitioners, and enthusiasts from various disciplines to share their insights and contribute to the synthesis of knowledge in this area of CUI and health and wellbeing. While formal paper submissions are welcomed, they are not mandatory for participation.

Attendees may choose to engage by submitting:

Position papers or short research papers (up to 4 pages, following the ACM Extended Abstract format) detailing studies, novel systems, new theories, or ongoing challenges in the field.

Short expressions of interest that outline their background and interest in the workshop themes. These can be submitted via email to the organizers and should include a brief description of the applicant's relevant experience and a link to their professional or scholarly webpage.

Paper presentations – If you’ve published work related to conversational agents and healthcare, we’d love to include it in our session!

Submit your papers as a single PDF via email to shashank.ahire@hci.uni-hannover.de


Key Dates

  • June 3, 2025: Submission deadline (extended from May 16, 2025)
  • June 7, 2025: Acceptance notification (may be adjusted to align with the early-bird conference registration deadline)
  • June 20, 2025: Camera-ready deadline
  • July 08, 2025: Workshop day

Organizers

Shashank Ahire is a PhD candidate in the Human-Computer Interaction group at Leibniz University Hannover. His research focuses on developing proactive voice interventions for the health and wellbeing of knowledge workers.

Melissa Guyre is a Product Management Lead at Panasonic Well, focusing on AI-driven family wellness product incubation, including the development of a conversational AI Family Wellness Coach.

Bradley Rey is an Assistant Professor at the University of Winnipeg in the Department of Applied Computer Science. His research focuses on designing and developing in-situ wearable interfaces that empower people to better explore and make sense of their personal health data anytime and anywhere.

Minha Lee is an Assistant Professor at the Eindhoven University of Technology in the Department of Industrial Design, with a background in philosophy, digital arts, and HCI. Her research concerns morally relevant interactions with various agents like robots or chatbots, exploring moral concepts like compassion and trust.

Heloisa Candello is a research scientist and manager of the Human-centered and Responsible Technologies group at IBM Research. Her work focuses on human and social aspects of Artificial Intelligence systems, particularly CUI.


Workshop Papers

Title - Focus on the Successes: The Conflict between the Goals of “Data Science” and What Patients Want from Conversational Robots in Healthcare

Authors - Casey C. Bennett


Title - Passive At-Home Health Monitoring Systems for Older Adult Care

Authors - Elaine Czech, Aisling Ann O’Kane, Kenton O’Hara, Eleni Margariti, Abigail Durrant, David Kirk, and Ian Craddock


Conversational Health Interfaces in the Era of LLMs: Designing for Engagement, Privacy, and Wellbeing

Reflections from our CUI 2025 workshop with researchers, designers, and practitioners.

At CUI 2025, our workshop “Conversational Health Interfaces in the Era of LLMs: Designing for Engagement, Privacy, and Wellbeing” brought together researchers, designers, and practitioners to explore the promises and pitfalls of using conversational user interfaces (CUIs) for health and wellbeing. Across group activities, we unpacked challenges around bias and fairness, user agency in stress interventions, and proactivity in exercise support.

Biases and Fairness in CUIs

One recurring theme was the bias embedded in foundation models and how it manifests in health conversations.

  • Benevolent bias isn’t always benevolent: When agents framed responses positively (e.g., “You’ve achieved a lot at such a young age”), participants perceived them as patronizing or insincere. Forced positivity can drift into toxic positivity.
  • Empathy vs. anthropomorphism: Overly empathetic or human-like responses sometimes triggered negative reactions (an “uncanny valley” effect). Context matters—transactional health advice differs from emotional support.
  • Transparency matters: Users want to know how their history shapes responses. CUIs should clearly explain why certain suggestions are made.
  • Safeguarding vs. misrepresentation: Attempts to make models “safer” can sometimes distort representation, introducing different perceived biases.

Open research questions

  • How can we study bias and fairness ethically without harming participants?
  • What triggers cause a chatbot to appear biased or unfair?
  • How much does “personality” matter in user perceptions of fairness?

Stress Interventions and User Agency

Another group examined how CUIs might deliver stress interventions in office settings—a context where privacy and discretion are paramount.

Key insights

  • Stress detection is tricky: Sensors may misinterpret energy as stress. Users should confirm or adjust predictions rather than being told “you are stressed.”
  • Delivery must respect context: A desk worker in a meeting might want to defer an intervention. CUIs should ask, “Would you like to do this now or later?”
  • Multi-modality is essential: Voice can be intrusive in shared spaces; offering text or silent modes increases accessibility.
  • Customization at onboarding: Let users pre-set preferences (e.g., how to handle stress during meetings) and refine them post-intervention, rather than mid-stress.
  • Transparency and trust: Clarify data sources, access control, and whether employers/colleagues can see information. Agency comes from informed choice.

Takeaway: Stress-support CUIs must empower users with control over when, how, and why interventions occur.

Proactivity in Exercise Support

The third group explored proactive CUIs for exercise, asking how interventions can be encouraging without being intrusive.

  • A fine line between support and overbearing: Enthusiastic prompts may motivate some users but alienate others who feel patronized.
  • Context changes everything: Illness, mood, location, and long-term goals shape what support is appropriate.
  • Beyond metrics: Shift from rigid, goal-based prompts to reflective dialogues (e.g., “How are you feeling right now?”) to foster meaningful, long-term engagement.
  • Ethical concerns: Proactive CUIs risk over-dependence or “optimization addiction.” How much intervention is too much?

This discussion underscored the need for proactive CUIs that listen, adapt, and evolve with users—balancing helpfulness with respect for autonomy.

Closing Thoughts

Across these sessions, one message was clear: Conversational Health Interfaces must walk a delicate line. They need to be proactive without being intrusive, empathetic without being condescending, and supportive without compromising user agency or privacy.

To get there, we must:

  • Build transparency into every interaction
  • Offer flexible modalities for diverse contexts
  • Study bias with ethical, innovative methods
  • Design with the lived experiences of users at the center

The workshop was only the beginning, but it underscored that the future of health CUIs depends not just on smarter models—but on smarter design choices that empower people to take charge of their health and wellbeing.