
7th HLF – Lecture: Can We Trust Autonomous Systems? Boundaries and Risks

Formal Metadata

Title: 7th HLF – Lecture: Can We Trust Autonomous Systems? Boundaries and Risks
Number of Parts: 24
License: No Open Access License. German copyright law applies. This film may be used for personal purposes, but it may not be distributed via the internet or passed on to external parties.

Content Metadata

Abstract
Can we trust autonomous systems? This question arises urgently with the prospect of massive use of AI-enabled techniques in autonomous systems, critical systems intended to replace humans in complex organizations. We propose a framework for tackling this question and providing reasoned, principled answers. First, we discuss a classification of different types of knowledge according to their truthfulness and generality. We show basic differences and similarities between knowledge produced and managed by humans and by computers, respectively. In particular, we discuss how differences in the process by which knowledge is developed affect its truthfulness. To determine whether we can trust a system to perform a given task, we study the interplay between two main factors: 1) the degree of trustworthiness achievable by a system performing the task; and 2) the degree of criticality of the task. Simple automated systems can be trusted if their trustworthiness matches the desired degree of criticality. Nonetheless, the acceptance of autonomous systems for complex critical tasks will additionally depend on their ability to exhibit symbiotic behavior and to allow harmonious collaboration with human operators. We discuss how objective and subjective factors determine the balance in the division of work between autonomous systems and human operators. We conclude by emphasizing that the role of autonomous systems will depend on decisions about when we can trust them and when we cannot. Making these choices wisely goes hand in hand with compliance with principles promulgated by policy-makers and regulators, rooted in both ethical and technical criteria.

This video is also available on another stream: https://hitsmediaweb.h-its.org/Mediasite/Play/e1dbc878bf6b4df6b47236c56cc0b6241d?autoStart=false&popout=true

The opinions expressed in this video do not necessarily reflect the views of the Heidelberg Laureate Forum Foundation or any other person or associated institution involved in the making and distribution of the video.

More information about the Heidelberg Laureate Forum:
Website: http://www.heidelberg-laureate-forum.org/
Facebook: https://www.facebook.com/HeidelbergLaureateForum
Twitter: https://twitter.com/hlforum
Flickr: https://www.flickr.com/hlforum
More videos from the HLF: https://www.youtube.com/user/LaureateForum
Blog: https://scilogs.spektrum.de/hlf/