
Artificial Intelligence: Why Explanations Matter

Formal Metadata

Title
Artificial Intelligence: Why Explanations Matter
Number of Parts
18
Author
Albert Weichselbraun
License
CC Attribution 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
In the rapidly evolving field of Artificial Intelligence (AI), understanding model decisions is becoming increasingly vital. This talk explores why explanations are crucial for both technical and ethical reasons. We begin by examining the necessity of explainability in AI systems, particularly for mitigating unexpected model behavior and biases and for addressing ethical concerns. The discussion then transitions to Explainable AI (XAI), highlighting the differences between interpretability and explainability and showcasing methods for enhancing model transparency. A real-world example will demonstrate how these concepts can be practically employed to improve model performance. The talk concludes with reflections on the challenges and future directions in XAI.

About the speaker(s): Albert Weichselbraun is a Professor of Information Science at the Swiss Institute for Information Research at the University of Applied Sciences of the Grisons in Chur, and co-founder and Chief Scientist at webLyzard technology. He has authored over 90 peer-reviewed research publications and has been a member of the expert group on communication science of the Swiss Academies of Arts and Sciences.
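
As a rough illustration of the kind of model-transparency methods the abstract refers to (the specific techniques and examples used in the talk are not detailed here), the sketch below applies permutation feature importance, a model-agnostic, post-hoc explanation method, to a scikit-learn classifier. The dataset, model, and parameter choices are illustrative assumptions only, not taken from the presentation.

    # Illustrative sketch only (not from the talk): permutation feature importance,
    # a model-agnostic, post-hoc explanation method, applied to a generic
    # scikit-learn classifier. Dataset, model, and parameters are arbitrary
    # assumptions chosen to keep the example self-contained.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Train an opaque ("black-box") model on a small tabular dataset.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Permutation importance: shuffle one feature at a time and measure how much
    # the held-out score drops; larger drops indicate features the model relies on.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(
        zip(X.columns, result.importances_mean, result.importances_std),
        key=lambda item: item[1],
        reverse=True,
    )
    for name, mean, std in ranked[:5]:
        print(f"{name}: {mean:.3f} +/- {std:.3f}")

Methods of this kind leave the trained model untouched and explain it from the outside, which is one way of enhancing transparency for otherwise opaque models; intrinsically interpretable models are the complementary approach contrasted in the interpretability-versus-explainability discussion.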