CHI’23 workshop on
Intelligent Data-Driven Health Interfaces
This workshop will explore future work in the area of intelligent, conversational, data-driven health interfaces both from patients’ and health care professionals’ perspectives.
We aim to bring together a diverse set of experts and stakeholders to jointly discuss the opportunities and challenges at the intersection of public health care provisioning, patient and caretaker empowerment, and the monitoring of health care provisioning and its quality. Addressing these will require AI-supported, conversational decision-making interfaces that adhere to ethical and privacy standards and address issues around agency, control, engagement, motivation, and accessibility.
The goal of the workshop is to build a community around intelligent data-driven interfaces and to create a road map for future research.
You can read our CHI 2023 Workshop Paper here.
Important dates:
Position paper submission deadline: February 15th (anywhere on earth) on EasyChair here.
Notifications of acceptance: February 28th
Camera ready submissions due: March 7th
Pre-workshop café: March 15th on Zoom at 16:00 CET (passcode: 873596)
Workshop date & place:
Sun 23-Apr at 9:00 – 17:30 in room X05,
dinner at 19:00 at Altes Mädchen (a 20-minute walk from the congress center).
Call for papers
Authors are invited to submit entries of up to one page (ACM single-column format) detailing their interest and their past, current, and future work on the topics of the workshop. Below follows a non-exhaustive list of potential topics to be discussed, drawing on the scenarios in the user journeys of health care professionals and patients depicted in Figure 1.
Agency and control: Ethical and regulatory standards are high for decision-making that directly impacts patients’ health and care (patient-facing) or indirectly impacts them through changes in health care provisioning (HCP-facing). How much guidance should intelligent interfaces provide for HCPs trying to find causes of poor care provisioning? To what degree should these interfaces restrict actions likely to lead to spurious conclusions akin to p-hacking? How, and how many, insights requiring human follow-up should intelligent dashboards present to users? What roles should auditing and the logging of interactions play? How should conversational user interfaces integrate with data dashboards and visualisations, and draw on tools such as data storytelling?
Engagement and motivation: Involving patients in user-centred design has the potential to improve individualised healthcare decisions by better meeting user needs. How do we, intrinsically or extrinsically, motivate patients with different outlooks, e.g. those who are not interested in taking a more active stance in their health care? How does motivation depend on other aspects? Do we need to motivate clinicians entering and analysing data, and if so, how? How do we best motivate patients to participate in data-driven technology development and evaluation, particularly elderly and disabled patient groups? What are cost-effective incentives to improve patient engagement and the utilisation of the developed technology? How can we nudge users despite the asymmetry between patients and clinicians that currently impedes the sharing and reviewing of recorded data?
Accessibility and understanding: How easily can the processes for data entry and review be understood and undertaken by novice users who may have cognitive, communicative, sensory and/or mobility disabilities? How can systems accommodate the needs of users with existing, newly acquired, or degenerative conditions over time? In cases of severe impairments, how can design support users in engaging with data collection and reflection activities? How do we best onboard patients and clinicians? How can we make complex, AI-supported decision support tools more transparent and accessible to patients and caretakers? How might we provide insights from decision support tools that are actionable for people managing their health?
Trust and data veracity: How do we address shortcomings in the conversational ability of intelligent interfaces that might erode trust in the provided insights and advice? How should advice dispensed by automated decision-making systems be validated, and who bears responsibility when the individual profile data underlying the advice is inaccurate? How should systems structure interactions to assess and cross-check entered data, ensuring high veracity without burdening HCPs?
Privacy: How do we effectively employ data minimisation strategies to ensure privacy while, at the same time, creating rich repositories and registries of data? Data-driven projects may want to collect data for unforeseen purposes, e.g. to avoid Simpson’s paradox, combine data sets, or even create “data lakes”. This becomes a privacy issue, since it can significantly expand user profiling activities, the inference of new data, etc. How do we create friendly consent interfaces that do not impede onboarding processes and still meet their purpose? How can patient and user consent be managed dynamically, for example when users change their minds or are asked for consent to additional data and processing purposes? How do we effectively return control of data to users? How can we enable transparency and the ability to intervene in systems so that users know what data has been collected, how it is processed, and who has accessed it? This must specifically allow users to exercise privacy rights such as access, correction, deletion, and objection to processing whenever possible in the healthcare context.