Upol Ehsan (Georgia Tech), Philipp Wintersberger (TU Wien, Austria), Elizabeth Anne Watkins (Intel Labs), Carina Manger (Technische Hochschule Ingolstadt), Gonzalo Ramos (Microsoft Research), Justin D. Weisz (IBM Research), Hal Daumé III (University of Maryland & Microsoft Research), Andreas Riener (Technische Hochschule Ingolstadt), Mark Riedl (Georgia Tech).
Explainability is an essential pillar of Responsible AI. Explanations can improve real-world efficacy, provide levers for harm mitigation, and serve as a primary means to ensure humans’ right to understand and contest decisions that AI systems make about them. In upholding this right, XAI can foster equitable, efficient, and resilient human-AI collaboration. This workshop serves as a junction point for cross-disciplinary stakeholders of the XAI landscape, from designers to engineers and from researchers to end users. The goal is to examine how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels. Consequently, we call for position papers that make justifiable arguments (up to 4 pages excluding references) and address topics involving the who (e.g., relevant diverse stakeholders), why (e.g., social and individual factors influencing explainability goals), when (e.g., when to trust the AI’s explanations and when not to), or where (e.g., diverse application areas, XAI for actionability or human-AI collaboration, or XAI evaluation). Papers should follow the CHI Extended Abstract format and be submitted through the workshop’s submission site (https://hcxai.jimdosite.com/).
Location: Hamburg, Germany