... and justice for AIl
Formal Metadata
Title: ...and justice for AIl
Title of Series: EuroPython 2024 (talk 56 of 131)
Number of Parts: 131
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers: 10.5446/69394 (DOI)
Transcript: English (auto-generated)
00:05
Hi, I'm Martina, and welcome to "...and Justice for AIl", which looks much better written down than it sounds spoken out loud. It was inspired by an album of a metal band I'm not going to name, but maybe you know it, and so the talk will have a track list according
00:23
to the album. Okay, I'm your game master for today. I'm still a student of law because I had to take a small break to become a mother of two beautiful daughters. And now I'm back at the university, and my key area of focus is media law, which includes intellectual property law,
00:42
but also data security, cyber security, IT, and AI. Okay, my first term paper was actually about AI and the declarations of intent. So maybe that is why the AI debate is kind of an ethical one to me. And I will try not to get too philosophical
01:01
and keep my own opinion in check. But yeah, we can talk about that later. So let's go on our novel quest. Grab your swords and your staffs and your bows and 10 feet of rope, which are always very important, and let's venture forth, or not.
01:22
This is on? Yeah, this thing is on. I have tried turning it off and on again. That much I know. Spacebar. It doesn't work as well. Oh, no, I'm so sorry.
01:45
That's a great start, isn't it? The best. The best, yeah. They're not moving forward, the computer is not. In our test runs, it worked with our TV. I don't know why it doesn't work now. It worked on my computer.
02:01
But I'm using Arch Linux, so what do I know? So I might tell you that he was the one setting my computer up, so. Okay, so, thank you.
02:20
Okay, you found that strange map, and yeah, on the map there are, almost hidden by a shrivelled mud bed, 12 stars on blue ground. And the map is leading you right through the swamp of lobbyism. Because to understand what the AI Act is, we have to know how it came to be.
02:42
And yeah, we need to know about legislative decisions and for that I will split that up into the four big Ws, five big Ws, sorry. Okay, so, of course, who? The member states of the EU. We have different nations with different cultures, histories, values and worries, and they need to find a common ground,
03:01
but also kind of find a common ground with non-EU players. But more about that later. We also have our experts. Those are the ones that are often heard and then overheard again and dismissed. And I guess they want to bash their heads in at some points. But yeah, at least they are included in the process.
03:20
We have our NGOs, which might have the same problem. And yeah, those are, for example, Amnesty International or something like that, which were not included in this AI Act, but just to give you an example of NGOs. And last but not least, we have our lobbies, which are huge companies
03:40
and they will always tell you about how data protection is bad for your business. For example, they did that with the GDPR. And yeah, they're always threatening government with their visions of impending doom and poverty and whatnot. So government often gives way to that. The where?
04:01
The EU. The, of course, the EU. But also we have a view to the US because they are drafting a quite similar law to our own and they are not done yet. But yeah, we kind of work together with them. Also, we do have other global players as well,
04:22
like Asian players. They want to export their goods into Europe and so we have to view them as well. So the when? The AI Act kind of started in 2018 with a strategic paper and a coordinated plan for AI and then it went on with some experts, some more papers, some drafts
04:44
and plenary sessions. Then there was a final proposal and this year we had our final draft that was approved in May, I guess, April, May. And we expect it to be published in August, somewhere around August. And yeah, after that, 20 days later, it will enter into force.
05:05
Yeah, we will not be done with AI acting after that because the act is supposed to be observed and evaluated constantly. And within the process of AI acting, we are still making a lot of changes in very little time.
05:21
So this might lead to unclear legislative text, which can then lead to loopholes or contradictions leading to dismissals, which wouldn't be very good. We just have to wait and see. I will say that very often. Also, there's the fright of the voter looming around Congress, because if the government composition changes too much,
05:42
like to the right wing area, it may be impossible to get some laws done. So it's kind of a now or never approach. It was with the GDPR as well. The what is a lot to chew on. It's currently 224 pages of legalese
06:02
and yeah, it's mostly addressing operators. We will come to that later. It will be addressing deployers as well at some points, but yeah, they are almost the same, mostly the same thing in this context. Yeah, it wants to create a balance between interests and the interests are, of course, the citizens.
06:22
It wants to give a colorful bouquet of protected fundamental rights, but also be a pillar of economy to build bridges to other global players, but also within the EU. Okay, now we have the why. This AI Act is a pioneer project
06:41
because it's not only the first law about AI, but also it's the first time the law was done completely for more than one nation, for more than one state. Before there were already laws done, but they had to be harmonized like the GDPR, but now it's new for all of us.
07:04
So it has a multitude of different regulatory objectives, of course, and we also wanted to harmonize law within the EU, but also globally because there are certain themes of trustworthy AI. It should be fair, reliable and safe, private and secure,
07:24
inclusive, transparent and accountable. You know that better than I do. It is also supposed to level the competition among EU players, but also with outside players, because they want to import and export products into the EU,
07:40
and it's supposed to balance innovation and risk, and through that protect its citizens. It's supposed to give transparency, security, confirmability, and it should guarantee environmentally friendly and non-discriminating technologies, and environmentally friendly would be really great if that happens.
08:01
Okay, so hopefully you didn't get your feet stuck too often in the swamp. I know it can be difficult to wade through it, through all the legalese, but now we have to press on and find our way through the mists of legal uncertainty and might even clear them. We lawyers love our definitions, so I will take you through a few of them.
08:23
The first is persons. You may find some definitions a little bit strange because the legal sense is different from the sense the word would have for you. But yeah, that's just how it is. You have to deal with it. So first there are operators or providers. There is a slight difference,
08:41
but not really within the AI Act. Those are the people who place AI systems on the market or put them into operation. Next are the deployers. Those can be natural or legal persons. They use those systems, and to give you a few examples of legal persons, those are like administrative offices,
09:03
associations, clubs, yeah, public authorities. Next, the AI systems. Okay, the elders of the European Union are making quite a lot of fuzzy laws, and they're doing that on purpose because fuzzy laws include different conditions,
09:20
jurisdictions, and in this case, different kinds of computer programs. That's why they came up with a risk-based approach. We will get to that later. And please, let's not fight about AI. There's no definition for AI. There's not even a definition for intelligence in psychology. So there's no possibility to have a definition for AI.
09:41
This is basically AI for politicians. So see it as sophisticated if statements if you want to. Okay, so AI systems should be machine-based systems designed to operate with varying levels of autonomy. They may exhibit adaptiveness after deployment,
10:01
infer from input how to generate output, such as predictions, content, recommendations, decisions, and they can influence physical or virtual environments. I guess you didn't know. Okay, next is GPAI. This must be completely new to you as well. They are trained with a large amount of data,
10:21
displaying significant generality, and are capable of performing a wide range of distinct tasks. Also, they can be integrated into a variety of downstream systems or applications. Sorry for reading that, but these are just definitions. AI models without the GP, those are programs
10:41
that have been trained on a set of data to recognize patterns or make certain decisions without further human intervention while applying different algorithms to relevant data inputs. Now we have GPAI systems. I'm sorry if that's wrong. Systems that are based on GPAI models that can serve a variety
11:00
of purposes, used directly, or integrated into other AI systems. And now the categorization of AI systems. I took a graphic from the EU commission because I liked it quite much. You see, you have your, whoopsie, oh no. Okay, this should be laser, I'm just gonna leave it out.
11:21
Okay, this is supposed to be your soil if you want to. And on that, we built our pyramid. You even have some examples. Now there are more examples for like minimal risk AI systems, which is a good one. But yeah, you can see how this works out. Okay, so I hope we cleared up the mists
11:41
and now we have to roll up our trouser legs and start ascending mount risk. First, we are at the base, of course. If systems could be used otherwise than minimal risk AI systems, it really depends on the intention and the actual use of the system. Fuzzy logic right there.
12:01
You will have to fight with that. Okay, if the system does not pose any risk for people and has a narrow scope of responsibilities, it's a minimal risk AI system. And there you can find it. Some examples are AI enabled video games, but also spam filters, ad blockers, translators,
12:23
and like sorting software. If you have a software sorting screws by size or something like that, this would be a minimal risk AI system. Next is the mountain flank or slope, which is transparency risk. Those are AI systems intended to interact
12:43
with natural persons or to generate content or to manipulate content. And so that the system would appear as a natural person or the content would appear as from a natural person. Well, it feels like a huge step from minimal risk to a transparency risk
13:02
because from there it's way easier to Python. No, I'm sorry, to slither into the high risk category. And also there are, there could be systems that are actually high risk, but if they are only used for subordinate auxiliary work, they would be considered as transparency risk AI systems.
13:25
Yeah, now we come to our saddle. Yeah, the key factors. Your system is not considered high risk if it's not posing a significant risk of harm to people, even if it falls in the later list of areas,
13:43
but okay, fuzzy. It's based on the intended purpose, the function performed, the specific purpose and modalities. Systems that can harm the health, safety or fundamental rights of people are considered high risk systems.
14:01
Okay, so first are the systems integrated as a safety component of a product. You have medical equipment, but also toys, elevators, this all could be the case. The next ones are systems used in specific areas like administration. This could go for migration, border control and refugees,
14:22
also for criminal prosecution, juridical aid or substantial public or private services and benefits. The next is critical infrastructure. If the system is used there, like for waterways, public transport or electricity, it's also high risk.
14:40
Then we have education, university and schools, of course, and employment relations or self-employment. We have finally reached our mountaintop and here we don't even talk about systems anymore, but we are talking about, as you can see, prohibited AI practices, because a monkey with a stick could run the system
15:01
in a harmful way, we all know that. You have to roll on your stamina again because there's another impenetrable wall of text you have to run out. First we have manipulative systems impairing informed decision-making. Like maybe you saw the Communication Congress last year, Ife Volfnager talked about a GP AI system
15:24
recommending, I'm so sorry, antidepressants for people expressing low spirits; that would be impairing informed decision-making. Also systems exploiting vulnerabilities,
15:40
systems classifying people like your social scoring in China, criminal risk assessing, facial recognition databases. Yeah, this is a fun one because if you think this is too good to be true, you are actually right. This is only for untargeted scraping of the data.
16:02
If you're scraping with a target, you're fine. Then systems inferring emotions in workplaces or educational institutions. Yeah, biometric categorization to deduce personal data. This goes for sexual orientation, beliefs, ethnicity.
16:21
If the data is impersonal, the system would be fine. Also real-time remote biometric identification in public areas. And the last one is systems running practices prohibited in other EU laws. We will have to wait and see what those laws could be. But they're most likely something like
16:42
unfair commercial practices, like those that are blacklisted within the EU. Okay, now we are floating on our cursed cloud of cybernetics down to the ground. We do have roles. Providers and operators are mostly the same
17:01
and they are mostly companies. But they're also, okay, sorry. Okay, and the act addresses mostly the providers and the operators. I would like to see them as our tanks because they are mitigating the damage, they are getting all the damage. Yeah, and those are the ones that are, yeah,
17:23
like making the software. So you, it would be you as well, you developers. Next is our deployers. They are not always addressed, but only sometimes. So they are like our strikers. They are the ones, yeah, getting this software to use it
17:43
or for others to use it. So they have to get the target and assess the target. So this is why I call them strikers. And I would love to call the government our healers because they will have to do everything right, all the damage right that had happened.
18:01
Yeah, there are also many uncertainties, like regarding the chain of production and the value chain. Will everybody along the chain be liable for the system or not? Even downstream operators just modifying or tuning the system? We will have to wait and see.
18:23
And the responsibilities. Yeah, you have to classify your system. This is your first responsibility and always your first step. You have to, of course, comply to the rules that are taking effect on your system. And you have to do that before the marketing and during the life cycle of your system.
18:41
Also, you have to oversee and evaluate your system. And if it changes, you have to get back to step one, unfortunately. But only if it changes so much that it would fall into another category, that is. Yeah, you have to know your authority, which may be kind of impossible at the start, because the EU wanted slim structures
19:00
and as little bureaucracy as possible. So they have to create everything from scratch. And there may be a lack of coordination mechanisms between the authorities at the start. But we will grow into this. Yeah, and you have to keep your laws in check, of course. You always have to do that.
19:20
Every employee that has to do with your system also has to have AI competence. Okay, so there are certain obligations for GPAI models. They are only for the providers, not for the deployers. Of course, you have to draw up your technical documentation, maintain the documentation,
19:40
and make information and documentation available. You also have to give a detailed summary of the content used for training the models. And this is, okay. There will be an assumption of conformity for GPAI systems and minimal risk systems. So yeah, the government says you will have to self-
20:03
regulate, yeah, and you will have to use those systems in a self-regulatory way. We will see if that's good or bad, if the companies will, yeah, hold onto that or if they will just do what they want to do. There is an exception.
20:21
For example, if your software is free and open source with a positive effect on innovation, research, and competition, you don't have to do the documentation part. So there will be some exceptions. Ah yeah, and you have to give access to the copyrighted training data. Just in case one of this looks different than,
20:41
no, whatever. Okay, so, obligations for your systemic-risk GPAI systems. I call them GPAI Plus. Those are models with high-impact capabilities. You have to notify the European Commission if the total computing power exceeds that amount of petaflops. I'm not gonna pronounce it.
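[Editor's sketch, not part of the talk: to make that threshold concrete, here is a minimal Python check, assuming the cumulative training compute figure of 10**25 floating-point operations named in the Act's systemic-risk presumption; the names are illustrative and the figure should be verified against the current legal text.]

    # Minimal sketch of the systemic-risk notification check described above.
    # The 10**25 FLOP threshold is the Act's presumption for high-impact GPAI
    # models at the time of writing; verify it against the legal text.
    SYSTEMIC_RISK_FLOPS = 10 ** 25

    def must_notify_commission(cumulative_training_flops: float) -> bool:
        """Return True if cumulative training compute exceeds the threshold."""
        return cumulative_training_flops > SYSTEMIC_RISK_FLOPS

    # Example: a run estimated at 3e25 FLOPs would trigger the duty to notify
    # the European Commission and to start risk tracking and reporting.
    print(must_notify_commission(3e25))  # True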
21:01
And yeah, you have to constantly assess and mitigate the risks it poses, and if there's damage, you have to track, document, and report the damage and take corrective measures. Now we have minimal risk AI. You just have to be in line with other legislation. There's basically no real legislation,
21:22
no obligation for minimal risk AI. Legislation could be, of course, the GDPR, but also the product safety regulation, the Cybersecurity Act. Oh, this is a general data protection regulation, in case you'd wondered. And the Data Act, which is the Digital Accountability and Transparency Act.
21:41
Yeah, and make sure your system stays minimal risk. For the transparency risk AI, you have to inform the user that he is communicating with an AI system. And you have to disclose that software-created content was software-created. And how are you gonna do that?
22:02
Okay, how are you gonna do that? The government wanted you to do it by watermarks. And yeah, so the fancy term is sufficiently reliable, interoperable, effective, and robust techniques and methods to enable marking and detection. Do with that what you want. That is the legal definition, but it's basically watermarks.
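[Editor's sketch, not part of the talk: as a toy illustration of machine-readable marking, the snippet below tags a generated image as AI-made via PNG metadata using Pillow. The file names and metadata keys are made up, and a plain metadata tag is easily stripped, so on its own it would not count as a robust technique in the Act's sense; it only shows the idea.]

    # Toy sketch only: attach an "AI-generated" disclosure to a PNG's metadata.
    # Keys and file names are illustrative; robust marking would need
    # tamper-resistant watermarking or provenance standards instead.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    image = Image.open("generated.png")          # hypothetical generated image
    info = PngInfo()
    info.add_text("ai-generated", "true")        # made-up, non-standard key
    info.add_text("generator", "example-model")  # made-up value
    image.save("generated_labelled.png", pnginfo=info)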
22:23
Yeah, if your system is not intended for high-risk purposes, but used by third parties, this is also an unclear context. You will have to wait and see what government does with it. So also, you have to prevent the fabrication of illegal content. And yeah, you have to publish
22:43
your intellectual property-protected training data. Okay, if you use an AI system in the workplace, you have to tell your employees about that. So that's basically it. Now we have the obligations for our high-risk systems.
23:01
And yeah, as I told you, a system is automatically considered high-risk if it poses harm or if it performs profiling of natural persons. Yeah, you have to implement a risk management system, including analysis, tests, and countermeasures in case of violations. You have to perform data governance
23:23
and establish technical documentation. And you have to give instructions to all those people that are working with the system, handling that system, so your downstream operators.
23:42
Okay, and you have to have a competent human supervisor and mark your product with a CE certificate. And if you're thinking about those stickers on your washing machine or whatever, you're exactly right. That was what the government wanted to have in the first place. So they said a physical certificate is mandatory,
24:01
but somehow, yeah, they woke up, and now it's also fine, if that's not possible, to integrate this certificate in your code. So at least there's something right. Now we have two different kinds of high-risk system areas. The first is the operational area.
24:21
There you have to go through a conformity assessment procedure. This is where you get your CE certificate. Then you have to do the declaration of conformity. This should hold for at least 10 years. And then you have to get a certificate of conformity through a notifying authority. And yeah, after that point, you have to be prepared for regular audits
24:44
and give full data access to the authority. It is doubtful that those controls guarantee the comprehensive protection of human rights. This is clear because there are so many systems and only so few authorities to do that. The second is our product safety component.
25:03
That is pretty easy because the procedure is done via the relevant product safety testing. So it's done by the notifying authority that does that now. And yeah, this authority will have specific examination subjects, as well as a focus on technical documentation
25:21
and the training data. The EU really hopes to consolidate the EU as a location for IT innovation, but it doesn't want quick development to set the standard; it wants product safety to set it. It wants to make the CE certificate
25:42
a standard of product safety and a recognizable mark for consumers so they can feel safe. Now we have a timeline. Yeah, the act has a two-year transitional period, and within this period, we have half a year regarding the prohibited practices, then a year to comply for GPAI systems,
26:04
then two years concerning the high risk systems, I know this is over the timeline, but whatever, two and a half years for the EU product legislation and yeah, that's almost it. Did I forget? Ah yeah, there will be codes of practice ready nine months after the act enters into force
26:23
and these will be given by the EU commission, as well as a product AI safety guideline and an AI liability policy. Those two will be around in April 2025.
26:41
I tried to make it somewhat interesting, I'm so sorry. When the cloud lands softly on the ground, we find ourselves at the gate of the labyrinth of imminence, which is, which we will have to find our way through. Okay, so unfortunately,
27:01
what seemed to be a trial of wits at first turns out to be nothing more than a picture puzzle, because the tests you have to do with your systems are nothing more than comparing your system with the law and seeing what categorization it fits in. So there are your prohibited practices in article five, your filter provisions, your transparency requirements
27:20
and yeah, the article about GPAI. Just read the articles and see if your system fits into them. Yeah, next, we have to test our battle tactics now, since we made our way to the middle of the labyrinth and we have to play a game of checkers that was left there for us.
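[Editor's sketch, not part of the talk: to illustrate that self-classification step, here is a minimal, non-authoritative Python sketch that walks the risk tiers from the top down; the boolean flags are simplified stand-ins for the actual questions in Articles 5 and 6, Annex III and the transparency provisions, not legal criteria.]

    # Minimal sketch of the "picture puzzle": check the tiers from most to
    # least restrictive and return the first one that matches.
    def classify_system(traits: dict) -> str:
        if traits.get("prohibited_practice"):          # e.g. manipulative systems, social scoring
            return "prohibited"
        if traits.get("safety_component") or traits.get("listed_high_risk_area"):
            return "high-risk"                         # safety components or listed critical areas
        if traits.get("interacts_with_people") or traits.get("generates_content"):
            return "transparency-risk"                 # disclosure and marking duties
        return "minimal-risk"

    print(classify_system({"generates_content": True}))  # transparency-risk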
27:41
So yeah, companies, but also authorities, national agencies, will have to give you a secure and controlled test environment to simulate your system under conditions that are as close to real life as possible.
28:01
before your system enters into the market, of course. And at least one members, there should be at least one sandbox per member state, but it's okay if member states share a sandbox. Okay, now, bureaus and authorities. Yeah, you won the party of checkers
28:21
and now you find out about the strange realm's hierarchy, which are the bureaus and authorities. We have our AI Board. This is working with national authorities and the EU commission; it's a central contact and supporter of national authorities. We have our AI Office. This is developing systems and procedures to prevent conflicts of interest.
28:42
It's also developing information platforms to compile expertise and capabilities regarding AI systems. So this is your go-to if you Google about AI compliance. It's coordinating authorities, surveilling violations and providing advice for the implementation of the AI act.
29:02
Both will be providing support for the implementation of sandboxes and codes of practice. All authorities, EU-wide and national, shall be independent, impartial and non-biased. Also, there will be CEN and CENELEC. CEN and CENELEC are private enterprises,
29:22
but they do standardization for you and they are instructed by the commission to do so. And they will create regulation concepts and sample systems. They will give you your guidance and incentives to comply and they will give out the guidelines. Maybe the guidelines will be more important for you
29:42
than the AI act itself. So we have also our national authorities, which is the national supervising authority and also the market surveillance authority. This is the important one for you because those are the central contact points for national support and the administration managers.
30:06
And each member state can appoint additional public institutions if they see the need for that. Now we come to sanctions. Yeah, those will be fines regarding to violations
30:20
of article five, also other violations and incomplete or misleading information. So yes, if your documentation isn't correct, that could lead to a fine. The highest fine will be 35 million euros or 7% of the global annual turnover of the prior business year for huge companies.
30:41
So this applies whichever is higher. This is the highest fine. There will be lower fines. You can see them in the articles if you want to. For startups and small and medium businesses, the fines will definitely be lower, because yeah, we want to give them some space to grow and adapt. Yeah, so there's no room for the consumer actually,
31:04
but I will be talking about that in a second. Now we got out of the labyrinth and now we have to cross the fiery fields of false hope. And why do I call them that? Because like, yeah, okay. If your armor only mitigates magical damage,
31:21
the fields of false hope are, yeah, physical, oh no, it only mitigates physical damage, but of course, the fire is magic, so the damage will get through. The gamers know that. So it's, yeah, the exceptions are the magic attacks, and the AI act only protects against the physical ones.
31:45
So the exceptions are areas of criminal prosecution or law enforcement. Yeah, the biometric profiling in public spaces if it's done for border control or predictive profiling. Oh, the areas of criminal prosecution.
32:02
This goes for like searching for victims of abduction, preventing threats to safety or identifying suspects in serious crimes. And it can also be done retroactively. So yeah, watch out for that. Then there is biometric classification for lawful labeling or filtering
32:21
for law enforcement purposes. And yeah, products whose integrated AI systems are not in the safety components. For example, if you have an intervention app as a responsive element to improve user experience for reasons of market research, yeah, this,
32:41
in my understanding, would be a pretty delicate risk area, but it's not high risk, not because it's not under the new approach for product safety, no, but because it's not in the safety component. Yeah, and also you can just specify your system as not high risk and then you have to register your system at the authority
33:03
and maybe be randomly audited. So why the AI Act, if you can just register your system? But okay, yeah. And now there's another one. There are the old approach systems instead of the new legislative framework. You find this in Annex I, Section B. Those are products
33:23
with integrated AI systems according to the old approach. And yes, those include car trailers or autonomous driving, also medical equipment or marine equipment, railway systems, and civil aviation. So why not feel safe with that legislation in order?
33:42
Yeah, okay. We have reached the ruins of confusion, and at least I hope your confusion lies in ruins, or maybe you built a castle with it. I don't know. And I don't want to debate what's more important, security or safety, individual freedom and privacy,
34:01
which weighs more, we can talk about that later. But I just want to highlight the issues, because you are the brave heroes that will defend the citizens of the EU with your mighty blades of zeros and ones against some horde of evil goons and evil masterminds. And yeah, so you share the burden of liability to some point
34:23
and you have to defend your code of conduct and your hacker ethics as well. And so I will talk a little bit about ups and downs, like for example watermarks; some artists feel safe with them, others say it destroys their art. Also, AI could help with propaganda
34:41
but also make it way worse with like false positives or it could limit the freedom of speech unintentionally. Yeah. There's still no liability for AI in the AI act, so we have to wait and see if private claims are done via
35:02
product safety or, yeah, whatever, private damages. It's just so fuzzy. We will get our guidelines for that. But yeah, I would have loved for something in the AI act to just make sure that private claims will also be heard. Yeah, and we don't know if the self-regulatory liability system
35:24
for the companies will lead to a better world or to a worse one. We will just have to wait and see. Okay, so now we found a magical creature at the top of the runes magic tower and we will have a look into its crystal orb in an attempt of judical divination.
35:43
And there are two areas in my opinion, the changes in rules and ruling. Maybe the governmental system will grow into the AI act and it will get some bylaws and some rulings, become clearer, and it will grow with AI technology as well. Yeah, maybe the fog in the economy will be cleared,
36:01
like, is AI a product, a player, a person? Not a real one, of course. But as we talk about authorities and companies as persons, there are legal persons in legalese. They are not humans, of course. Yeah, so because AI is making decisions for us, it would be nice to have some laws about that.
36:22
The second area is law enforcement, secret services and whatnot. Yeah, why do I say this? Because in April, for the first time, an EU court gave way to data security breaches. I don't know if you heard about that. Yeah, but companies and governments breached those laws all
36:43
the time and courts ruled against it but they didn't stop and now the court really said okay if they won't stop, what can we do about that? So this really worries me and it could be problematic with like identification or classification software in general. Yeah, okay, you're almost done.
37:02
So there's your reward, kind of. Now you can have a stop at the tavern. You can have some ale and some stew and hear the local tittle-tattle, for information, of course, is the true riches. And I will be listing my sources in the uploaded version of the presentation.
37:21
But here are some honorable mentions. In Germany I would check Heise or Golem, but internationally it's more like TechCrunch, Wired, Ars Technica or The Verge. For law especially there's GRUR and LexisNexis, but you have to pay for LexisNexis, so maybe don't go on there. And there is also the EU court and council to, yeah, stay posted.
37:46
Okay, so thank you all for listening and of course a special thanks to my husband, and also to Dr. Simon Gerdeman and Professor Andreas Wiebe of Georg August University of Göttingen and Professor Dr. Joel Ballock of Corvinus University
38:02
of Budapest because they taught me a lot about AI law and law in general and they had open doors for my questions and were just so supporting so I'm eternally grateful for that. Do you have any questions? Okay, thank you.
38:23
Thank you, Martina. Okay, we'll start with the Q&A right away then. So the AI act applies only to marketed products, right? So if I want to develop Skynet at home I'm still allowed to? Yeah, this is a good question actually.
38:43
Yeah, because it's for your private use, but it talks about deployers, so it mostly talks about the marketing. I would actually have to look up if it applies to private persons as well. I would say this is kind of, I don't know the English word,
39:05
but if you do something and this is in the law and there's something similar done which is not in the law, then you would say okay the law counts for that as well. So even if it's not in the AI act and something gets wind of that, it could be especially
39:22
if you have some profiling systems running with that. So this is just a yes and a no. You know, two lawyers, three opinions. Thank you, Martina. Any more questions? Just regarding the, so the definition of AI is a bit fuzzy
39:43
but it seems that the notion of training and data is like central and very important. So all this doesn't apply to every system that is not based on data but maybe more on like modelling or stuff like this. This wouldn't apply to that? Yes. It's those definitions.
40:03
If your system doesn't apply to those special definitions and that is why I read them to you basically, it won't be considered AI. So yeah. Yes. Thank you for the talk.
40:22
You spoke about how the AI Act restricts people who put these systems into place and how they use them. Does it talk at all about how models are trained or how these AI systems are designed? Not really. It's basically, if you're just going to train a model, if you're just playing around and not releasing it
40:42
into the open, you can do basically everything. So if it's, yeah, if it's getting outside your computer, then you have to comply with the act. Thank you. Yeah, you're welcome. Is this one? Yeah, cool. First of all, thank you very much. That was super interesting if slightly depressing
41:02
on how loose it is right at the very end. Yeah, I know. So throughout this, you've talked about risk. Is that always considered risk to like one specific individual person or is it risk like to a group of people or to like a country's worth of people at a time?
41:21
Actually, it's kind of both. If your rights are violated, you have the right to complain to the notifying authority and you are entitled to get an explanation from them. So this is everything that's in the act. So this is why I said I would have liked to have some private liability for private persons as well,
41:43
for private violations. But yeah, it goes for a group of people very often, for minorities, for example, or groups that aren't that, yeah, that are like children, you know, that could be harmed very easily. But yeah, you as an individual count as well.
42:01
Cool. Thank you very much. You're welcome. More questions? Hi. You've mentioned watermarks for content-creating AIs. So the sole purpose is to depict this as AI content.
42:21
But the content gets created by violating copyrights in the first place, right? So yeah, I mean, that's still the debate. And that's my question now. Does the law focus on that part of like how the content in the end gets created in the beginning?
42:43
Yeah, it kind of just touched that area by saying that if your training data is intellectual property protected, then you have to give information about that, so that everyone can see and so that everyone can say, oh no, I don't want my data to be in that system to train an AI.
43:04
But I guess this is an intellectual property protection case that you're mentioning. This is not part of the AI act that much. But yeah, there are people arguing about that. And there were people arguing about it during the legislative process as well.
43:24
So those voices are heard. But in the end, it's always going to be those lobbyists. If there's a strong lobby for intellectual property, then yeah, you will be heard. I know about the struggle because I know many artists. So, yeah.
43:42
I'm sorry. I'm sorry I couldn't give you a real answer. I have a question. I have a question about the risk levels. So if you use some general tool like ChatGPT or Copilot to produce code, and this code then in turn is used for something like a medical device and is used there,
44:02
how does it affect the risk level like that? It's like a human in the loop, and the human is looking at the code but is putting the code into the next one. Does it still apply? Does this risk level, the highest risk level, then apply to this human-evaluated, AI-generated stuff going up there? I see where you're coming from
44:20
and this is why the government has experts to ask those questions. I think, I don't know if they overlooked that question, because I mentioned, if your system is used for example by a third party, they still don't know what to do with it. And I guess this would be kind of a similar context if the code goes into the next, I understood that correctly,
44:43
right, if your code goes into the next level area, yeah. You would have to, if you know that and if it's your code or your AI, then you would have to revalidate. But yeah, if it gets out of control, I don't know how to handle it.
45:00
I'm so sorry. It depends, okay. It depends, yeah, as always with lawyers, I'm so sorry. Thank you. So we are almost up on time, we'll take two more questions. This person has been waiting for long. So if this legislation's only come into practice in the EU,
45:20
what's stopping like a corporation just moving their operations to another jurisdiction, so like USA or Asia or somewhere not within the EU? Yeah, all those companies that want to sell their products or implement the products within the EU would have to play by those rules. And yeah, if you're exporting from the EU to the US,
45:40
maybe you, maybe US citizens will say, oh, this is the European CE certificate, so this protects my rights. And so this is why we said we wanted to have this as a global standard kind of thing, maybe to get ahead of the competition that violates rights, like companies
46:01
that develop very fast, but yeah, overlook those things like data protection or whatever. So does that answer your question? Yeah, it does. Thank you. Thank you. Thanks for the speech. I have the following question. I noticed that there is a big list of exceptions,
46:22
for example, for national defense. So it seems that if any government marks the system as for national defense, they can do whatever they want. Are there any attempts to close this backdoor
46:43
in the documents? Yeah, I know the answer will be quite depressing as well. Yeah, we kind of rely on that everyone will play by the rules and that no one will use it to, yeah, will be misbehaving, but yeah, they actually,
47:06
in my opinion, can roam free like national, like military or whatever because they are so important and defending countries and whatever. So yeah, we will have to wait and see.
47:20
This is one of my major concerns and there, yeah, there are voices for someone to overlook that process and overlook like military and border control and whatever, but we will have to wait and see. I told you I would say this very often. I'm so sorry. Thanks. You're welcome.
47:41
Thank you, Martina. Thank you for all the questions. The room really came alive after your talk. I want to thank you for a great rundown of the European AI Act. I'm sure all of us are leaving more knowledgeable and informed after this talk. Another round of applause for Martina, please.