
Reducing the risks of open source AI models and optimizing upsides

Formal Metadata

Title
Reducing the risks of open source AI models and optimizing upsides
Title of Series
Number of Parts
798
Author
Contributors
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Leaders in AI development from OpenAI, DeepMind and Anthropic signed the following statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." What exactly is that risk, where does it come from, and how can the open source community work on AI so it is as beneficial as possible while avoiding these risks? Jonathan Claybrough, a software engineer from the European Network for AI Safety, will briefly introduce the topic and the main sources of risk in about ten minutes, then open the floor to an expert panel of speakers on AI governance and open source. We plan to interact heavily with the audience so that the open source community is represented in AI governance. The panel experts will be Alexandra Tsalidis from the Future of Life Institute's policy team and Felicity Redel from the foresight team at ICFG. Stefania Delprete, a data scientist with extensive experience in the open source community (Python and Mozilla), will moderate the session.

Key points of the presentation:
- Current vulnerabilities you expose yourself to when using AI models (low robustness, hallucinations, trojans, ...)
- Open weights of AI models don't bring the guarantees of open source (you can't read the code, debug, or modify precisely)
- Steps to reduce user (developer) risk (model cards, open datasets)
- Steps to reduce misuse risk (capability evaluations, scoped applications, responsible release)

Expert panel debate questions:
- What are the downside risks of unrestricted open source AI proliferation?
- What would governance of open source AI models that leads to good outcomes look like?
- How can the open source community contribute to AI Safety?
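The abstract's "steps to reduce user (developer) risk" mention model cards. As a minimal sketch only (the field names below are illustrative, not any standard schema such as Hugging Face's), a model card can be treated as structured disclosure metadata that downstream developers can check before adopting a model:

```python
# Hypothetical minimal "model card": the kind of disclosure metadata the talk
# suggests shipping with open-weight releases. All field names are illustrative.
model_card = {
    "model_name": "example-llm-7b",  # hypothetical model name
    "training_data": "publicly documented corpus (link to open dataset)",
    "intended_use": "research and narrowly scoped applications",
    "known_limitations": [
        "low robustness to adversarial inputs",
        "hallucinations: may produce fluent but false statements",
    ],
    "capability_evaluations": "summary of misuse-relevant evals run before release",
}

def missing_fields(card, required=("training_data", "intended_use", "known_limitations")):
    """Return the required disclosure fields absent from a model card."""
    return [field for field in required if field not in card]

print(missing_fields(model_card))  # -> []
print(missing_fields({"model_name": "undocumented-model"}))
# -> ['training_data', 'intended_use', 'known_limitations']
```

A check like this could gate a release pipeline, so that weights are only published alongside the documentation users need to assess the risks listed above.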