
Reducing the risks of open source AI models and optimizing upsides

Formal Metadata

Title
Reducing the risks of open source AI models and optimizing upsides
Series Title
Number of Parts
798
Author
Contributors
License
CC Attribution 2.0 Belgium:
You are free to use, change and reproduce the work or content, and to distribute it and make it publicly available in changed or unchanged form for any legal purpose, provided you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Release Year
Language

Content Metadata

Subject Area
Genre
Abstract
Leaders in AI development from OpenAI, DeepMind and Anthropic signed the following statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." What exactly is that risk, where does it come from, and how can the open source community work on AI so that it is as beneficial as possible while avoiding these risks? Jonathan Claybrough, software engineer from the European Network for AI Safety, will briefly introduce the topic and the main sources of risk, in about ten minutes, then open the floor to an expert panel of speakers on AI governance and open source. We plan to interact heavily with the audience so that the open source community gets represented in AI governance. The panel experts will be Alexandra Tsalidis from the Future of Life Institute's policy team and Felicity Redel from the foresight team at ICFG. Stefania Delprete, data scientist with extensive experience in the open source community (Python and Mozilla), will moderate the session.

Key points of the presentation:
- Current vulnerabilities you expose yourself to when using AI models (low robustness, hallucinations, trojans, ...)
- Open weights of AI models don't bring the guarantees of open source (you can't read the code, debug, or modify it precisely)
- Steps to reduce user (developer) risk (model cards, open datasets)
- Steps to reduce misuse risk (capability evaluations, scoped applications, responsible release)

Expert panel debate questions:
- What are the downside risks of unrestricted open source AI proliferation?
- What would a governance of open source AI models that leads to good outcomes look like?
- How can the open source community contribute to AI safety?