
AI and the End of the World


Formal Metadata

Title
AI and the End of the World
Subtitle
(as we know it)
Number of Parts
94
License
CC Attribution 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Like any other tool, AI has its benefits and its dangers, but we need to be aware of the dangers in order to reap the benefits unharmed. Everybody is talking about the benefits of AI and the areas it could be applied to. Only a few question whether this should be done at all and which dangers might arise with widespread adoption of AI. This non-technical talk gives insights into the different dangers of AI: from Skynet and intelligent war drones to Big Brother and a benevolent but misguided superintelligence à la "I, Robot", different scenarios with their prerequisites and probabilities are discussed. Although the ultimate end state of AI may be 50 years ahead, most people focus solely on that, neglecting the many dangers on the path to get there. But there are a lot of issues we need to start addressing today. We are living in interesting times ... a slightly different talk.
Transcript: English (auto-generated)
So, thank you for coming. Just a short question at the beginning. Anybody here who is not a German speaker, who doesn't speak German? OK, if you want to, we can switch to German. Is that OK? Is it fine with the... OK.
As you can see, in German it's a little bit different. [The speaker briefly continues in German before switching to English.]
I just asked, are you English? I'm not German. OK, so we switch to English, no problem. Is this with sound?
Oh, the sound doesn't work. OK. Well, if it doesn't, we can just skip it. No problem.
OK, let's just skip it. It doesn't matter. So, you probably all know many Hollywood films about A.I. and you just saw Stephen Hawking talking about the topic. So when we talk about artificial intelligence, what do we mean? Like, what is intelligence really? What are we talking about?
And the problem is that nobody really knows. Science hasn't come up with an agreed-upon definition of what intelligence is, so it's a bit problematic to talk about artificial intelligence. There was a famous guy called Alan Turing, and he came up with an idea,
being that nobody knows what intelligence is, but we all agree that humans are intelligent. So if humans are intelligent and we can't distinguish a machine from a human, then the machine must be intelligent, whatever that means. So this is the way he came up with the Turing test. At the time, he defined it as a chat competition or chat test
or a talk-to test, but this is essentially the idea behind it, right? If we can't distinguish a human from a machine, the machine has to be intelligent. And it turned out to be some form of joke in the A.I. community,
being that whatever problem we define as to be an A.I. problem, once it's solved, people decide that it's not an A.I. problem anymore. For instance, in the 50s, 60s, 70s, they said that chess is a very complicated game, and in order to be able to play chess,
the machine would have to be very intelligent, like strategic planning and everything. So if a machine could ever play chess, then this surely would be an intelligent machine, right? So IBM came up with Deep Blue.
And as soon as people realised how Deep Blue works, they said, no, that's not intelligence, that's not what we meant. And then they said, okay, language. Like, language is a very hard thing with subtleties and all the ambiguities and jokes and all of that, irony and stuff.
And if we can get a machine to understand human language, then it surely needs to be intelligent, right? So again, IBM came up with Watson, this time to play Jeopardy!. And alas, people said, okay, this is a narrow form of intelligence, but it's not what we meant when we said artificial intelligence. And the latest frontier that fell was the game Go. So people said, okay, there are more possible positions on the board
than there are stars in the universe. So if a machine can play Go, then it surely must be intelligent. And the same. So now we have a machine that can do that. And still, we don't say it's what we understand
as a general intelligence. Okay, so where are we today? Well, as I just said, we have AI that is doing some things, but it's not intelligent in a general sense. The AI we have today lacks empathy, has poor impulse control, has problems with planning and foresight, and has poor behavioral constraints. Actually, that is the textbook definition of a psychopath. Maybe that's the reason that people think that we will end in war. Like this is the famous movie plot.
Everybody knows the documentary, right? Terminator. And this is sure how it's going to be, right? Well, I mean, it's a great movie plot and it's very, very intense. But is it realistic in any way, conceivably? Well, if you think about how we compare against machines,
like we would in a war: our neurons, the biological neurons, operate at 200 hertz. That's how fast the individual neurons in the brain can switch. Compare that with what we have today as CPUs, which run in the gigahertz range.
Then the electrical signal in the nerve travels at about 120 meters per second. Meaning that, for instance, if you want to throw a ball, your brain has to send the signal to let go of the ball while your hand is still at the start of the movement, because by the time the signal arrives, your hand has already moved on; waiting until your hand is in the release position would be too late. That's why it's so hard to learn to walk, throw balls, play tennis, do anything like that: your brain essentially has to anticipate what is going to happen and send the corresponding signals and commands before it has happened. As humans, we can't wait until the ball is here. We have to decide what to do before the ball arrives in order to be able to make the movement. In contrast, obviously, to the machine. Actually, this is very interesting. If we say that the operation is comparable,
which it currently is not, because with a machine we simulate a neuron, but if it was comparable, when for you one second passes, because of that vast difference, for the machine, 63 years pass, right?
So, if we assume we operate at the same speed of thinking, then when one second passes for you, the machine has 63 years' time to decide what to answer, for instance, or how to dodge a bullet in a war. And when it comes to combat units: producing a combat unit like a human soldier takes about 16 years; in a factory, it takes several hours. So, if we ever get into a situation where we have a war against machines, there's no chance that we will win.
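The speed comparison above can be sketched with simple arithmetic. The arm length and the CPU clock rate used here are illustrative assumptions, not figures from the talk:

```python
# Back-of-the-envelope comparison of biological and silicon "switching".
# Assumptions (illustrative, not from the talk): ~1 m brain-to-hand nerve
# path, ~2 GHz CPU clock.

NEURON_RATE_HZ = 200.0     # neuron firing rate mentioned in the talk
NERVE_SPEED_M_S = 120.0    # nerve signal speed mentioned in the talk
CPU_CLOCK_HZ = 2e9         # assumed modern CPU clock

# Delay for a motor command to travel from brain to hand:
arm_path_m = 1.0
delay_s = arm_path_m / NERVE_SPEED_M_S
print(f"brain-to-hand delay: {delay_s * 1000:.1f} ms")   # ~8.3 ms

# Raw ratio of CPU switching speed to neuron firing rate:
ratio = CPU_CLOCK_HZ / NEURON_RATE_HZ
print(f"switching-speed ratio: {ratio:.0e}")             # 1e+07
```

On these assumptions, a release command for a thrown ball needs roughly 8 ms just to reach the hand, while in that same interval a CPU completes millions of cycles; whatever the exact subjective-time factor, the gap is enormous.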
So, I just want to make that very clear. Another comparison: in factories a couple of decades ago, we had cages that made sure that human workers weren't harmed by the clunky robots. Today, robots can analyze a human and anticipate their movements before the movements even become conscious to the human. So, the human doesn't even know yet that he wants to make a certain movement, and the machine can already react to it. That's the time difference we're talking about. And, of course, it makes for a nice movie foe if the enemy is roughly human-sized and looks like a human robot, but if you want to win against humanity, it absolutely doesn't make any sense. If you want to win against humanity, you create robots that are like nanobots. You can't shoot at them, right? How do you kill that? You dissolve structures, you dissolve humans at a nanobot scale. Which is why we don't want the war, okay? As soon as we get to the war, the game is over already. This is the reason that famous people like Elon Musk, when things got a bit hot with North Korea in 2017, tweeted that we shouldn't be concerned about North Korea,
we should be concerned about AI, because, in the end, machines will win. So, now I want to bust a couple of myths. Some people think that only Luddites worry about AI, and, as you just saw, Stephen Hawking, Elon Musk, Bill Gates,
many famous researchers think that AI should be worried about. People, influenced by Terminator, think that the problem is that AI might turn evil, but this is actually not the problem. The problem is: as soon as we are not 100% aligned with the AI's goals, there is a problem, and I'll get back to that later. And, of course, robots are not the main concern. We don't need a body representation. As soon as the AI is connected to the internet, it can just shut us down or do whatever it wants. So, here's a joke. You're the most advanced robot,
and out of my fear of the future, I order you to destroy all human-created, unfriendly intelligences. — Soon: moments before all humans were killed. So, the problem is: what does the AI understand if we give it an order? For instance, if I say, make me happy,
what does it mean to make me happy? It could mean, you know, create a Matrix and have me be the rock star, while all other people are simulated and I'm the only real person in there. Or, which is much more efficient, just flood my brain with serotonin, right? So, I'm happy.
But this is surely not what we want. So, the problem here is, and this was already talked about in the Greek mythology, King Midas, if we make a wish, we must make sure that this wish is what we actually want. So, King Midas, for instance, he wanted that everything he touched would turn into gold.
The problem was that he couldn't eat anymore because his food turned into gold, he couldn't drink anymore because his, you know, wine turned into gold, and eventually also he touched his daughter, which promptly turned into gold. So, this surely wasn't what he intended. So, this is the same thing essentially could happen to us.
Whatever we wish could be granted to the letter. If we say, okay, we just need to teach the AI common sense and morality — well, it turns out that common sense is not as common as we think, and morality is a very different thing for each of us,
and also for people living in China or Russia or whatever in the world. So, as long as philosophy doesn't provide us with a flawless and consistent morality, which it hasn't so far, this is not the way to go.
This won't help us. Okay, let's talk about how intelligent is an AI going to be. Well, if you take the intelligence staircase, so to say, and you have an ant and a chicken and a monkey, and here you have the humans, and on that staircase you have all humans from the dumbest guy in the village to Albert Einstein, right?
So, all on the same level. And then you take an AI and you let the AI improve itself — what we call a seed AI — and then you get an intelligence explosion, okay? So, the AI improves its own hardware and improves its own software, and within minutes, days, months, whatever,
you have that, okay? We can't even fathom what an AI that intelligent would want to do. We have no way of understanding, no way of forecasting, of telling.
And if the AI wanted something, it could just happen that we'd be in the way. So, it doesn't need to be, as I said, it doesn't need to be bad or malicious. It doesn't need to hate us. So, there's an example. If we want to build a road, and there's an ant hill where we want to build a road, we don't hate ants.
We just want to build the roads, goodbye ant hill, right? So, the same could happen to us. If the AI wanted to do something, anything, doesn't matter, and we happen to be in the way, well, bad for us. No bad feelings intended, right? So, yeah.
And the question is, when will that happen? So, here is the point where we create a real general AI that can improve itself, and then we have that intelligence explosion. And we're here. We don't know where we are. We don't know if this is in one year, in five years, in 50 years, whatever.
We don't know. It will happen at some point. So, all of the scientists agree that this will happen. They just disagree on when. But in the meantime, I want to talk about something different, which is, you know, AI becomes self-aware and rebel against human control,
and AI becomes advanced enough to control unstoppable swarms of killer robots. Okay, this is the time I now want to talk about. Okay, essentially, I made a short graph, so we are here today, tomorrow, whatever that means, one year, two years, and then short term, whatever that means.
So, I'm very vague here, because I'm not really interested in when it actually will happen. I just say that it will happen at some point; whether that's five years or 10 years or 20 doesn't matter to me. And then at some point, we will have, as I just said, an autonomous artificial superintelligence. So, and we are here.
So, what's the state of AI today? Okay, it didn't go exactly as planned. AI today works, more or less, but still has some flaws.
For instance, it has problems distinguishing between puppies and chicken wings. So, this is a very hard problem for AI: one is a chicken wing and one is a puppy. And today, we get many interesting news items, like AI is better at finding cancer than humans. Well, it turns out that's actually not such good news, because pigeons are also better at finding cancer than humans. If you take a pigeon and train it with MRI scans for 15 days, it beats human doctors.
So, it's actually not that hard to achieve, right? And this is where we are with AI today. And still, we already have some problems. The main problem is bias. What is bias? We have problems with bias even without AI. So, this is a racist soap dispenser. As you saw, for a white hand it gives you soap, but for a black hand it doesn't. And obviously, this doesn't have anything to do with AI; it just exemplifies what I mean by bias. The people who created that soap dispenser didn't think about black hands, right? They just tried it, probably, and it worked for them. So, they assumed it would work for everyone, which it doesn't. And the same thing will happen with AI. As we take AI and put it into everyday things, and put the algorithms everywhere, you will see a lot of that.
You will see that things don't work for certain types of people. For instance, Google Translate. There's the problem that the Turkish language has no grammatical gender, so everything is neutral, while English has no gender-neutral pronoun for a person. So, if you translate "he's a babysitter and she's a doctor" into Turkish, and then translate the exact same thing back, the algorithm has to decide what the output is going to be. There's no "it" — you can't say "it's a babysitter", that doesn't work. So, the algorithm has to decide between "she" and "he". And the algorithm, using heuristics and statistics and, essentially, bias, decides that the babysitter is more likely to be a she, and the doctor is more likely to be a he. And, I mean, that's not too bad, right?
It's just a translator, so it's not too bad. But it's just an example of what we will see more of. For instance, if you search, so this is fixed now, but if you searched for three black teenagers, you saw mugshots, and if you searched for three white teenagers, you essentially got stock photos, also clearly biased.
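The pronoun round-trip can be illustrated with a toy sketch. The co-occurrence counts and the selection rule below are invented for illustration; this is not Google Translate's actual model:

```python
# Toy illustration (invented counts, not a real translation system) of how a
# purely statistical translator can reintroduce gender when the source language
# has no gendered pronoun: it picks whichever pronoun co-occurred more often
# with the noun in its training corpus.

from collections import Counter

# Hypothetical corpus co-occurrence counts of pronoun/occupation pairs:
corpus_counts = {
    "babysitter": Counter({"she": 880, "he": 120}),
    "doctor":     Counter({"he": 700, "she": 300}),
}

def pick_pronoun(occupation: str) -> str:
    """Return the pronoun most frequently seen with this occupation."""
    return corpus_counts[occupation].most_common(1)[0][0]

# Round-tripping the gender-neutral Turkish "o" forces a choice:
print(pick_pronoun("babysitter"))  # -> she
print(pick_pronoun("doctor"))      # -> he
```

The bias isn't programmed in anywhere; it falls out of the frequency statistics of the training data, which is exactly why it is so hard to spot and to remove.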
And today in America, if you apply for a loan, the person in front of you won't make the decision. An algorithm will make the decision and tell the person what they should tell you. This is actually happening today. This is not in two years or in five years.
This is the state of today. If you apply for a loan, the algorithm will take into account where you come from, what your name is, where you went to school, and based on that, make a decision about whether you get a loan or not. And I didn't know that, but in the US they have a system
where they try to figure out who the worst-performing teachers are, and then they let go of them. So the lowest-performing 5% of all teachers are essentially fired. And this happened to a teacher who had been nominated by her principal — the person in charge at the school — who said that she's a very good teacher. And then the system decided that she should get fired. This stark contrast is the reason the general population learned of the case at all. But this happens on a regular basis. There's a system that's a black box that you can't look into, and it decides whether you get fired as a teacher or not. And this is today, not sometime in the future — today, in America.
And also, because judges have so much work to do and can't cope with their workload, they have a system that supports them. This system makes a suggestion about the length of the prison sentence, and about whether you should get bail and for what amount. And that system, too, is an AI system that has learned from past cases; based on your gender, your name, the place where you live, the place where you went to school, who your parents are, and so on, the system suggests how long you will go to prison. And if you're a judge, you can rule against that — you can overrule the decision — but then you have to give a reason why you did so. So very few judges actually do that. So essentially, AI decides how long you go to prison in America, today.
And the problem is that AI is a black box, so you can't question it, you can't look into it, you can't say, why do you come up with that number or why do you come up with that suggestion? It just happens, right? And you can either agree with it or you can ignore it, but you can't question it.
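How such a black-box score can discriminate without ever seeing a protected attribute can be sketched in a few lines. The weights, features, and zip codes here are all invented; this is emphatically not the actual system used by US courts:

```python
# Toy sketch (invented weights, not any real sentencing system) of how a
# "neutral" risk score can encode bias through proxy features such as zip code,
# which correlates with ethnicity and income in the training data.

def risk_score(prior_offenses: int, zip_code: str, age: int) -> float:
    """Opaque score: higher means a longer suggested sentence."""
    high_risk_zips = {"12345", "67890"}   # invented "bad" neighborhoods
    score = 0.4 * prior_offenses          # the only genuinely relevant feature
    score += 2.0 if zip_code in high_risk_zips else 0.0   # proxy bias
    score += 1.0 if age < 25 else 0.0
    return score

# Two defendants with identical records but different neighborhoods:
print(risk_score(prior_offenses=1, zip_code="12345", age=30))  # 2.4
print(risk_score(prior_offenses=1, zip_code="55555", age=30))  # 0.4
```

In a real learned model the weights are not even written down anywhere readable, which is the black-box problem: the output can be dominated by a proxy feature, and nobody can see that from the score alone.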
Yeah, and that leads to the next problem, do you obey technology? Like, who of you obeys technology? Okay, if you're driving a car, who is ignoring the traffic light? Nobody, right?
Five or ten years ago, if you wanted to go somewhere you'd never been, you would look it up on a map. Today, I get in the car and say, okay, bring me to my holiday destination. It's a thousand kilometres, and I never look at the map even once. I just totally trust the system that leads me there, right?
And the next thing is that I don't even have to drive anymore — the system drives for me. This is what I want to say: it's a slippery slope. We start with convenience, and eventually we end with life support. We come to depend on the system so strongly that without it — like, if our children don't learn how to drive, and I think that will be the case — then if there's a breakdown, if the satellites crash or something,
then they won't be able to go anywhere, right? And, I mean, this is just driving, but this also happens for other important things, like in my local village, there's a pharmacy that just recently installed such a robot. So they don't have to go and grab things from the storage.
So it stores the items and it's very efficient and it does it all by itself. But if the power breaks down, my pharmacist cannot give me my medicine. Right? Because she doesn't know where it is.
She even told me so — she has no clue. The system is fully autonomous and she doesn't need to interfere with it, which is great. But as soon as it breaks down and there's no power, she can't give me my medicine. And the same applies to many more life decisions, decisions that matter a great deal for your life, like who you are going to date. If you rely on Tinder and all of those platforms, they make a suggestion. And of course you can choose — you choose between the top three or five or top ten. But what if your perfect mate is at position 100? As you know, nobody goes to page three on Google, right? So if your perfect match isn't at the top, or at least in the top three, you're not going to find her or him. So this is a life decision where AI supports you.
But you don't understand the algorithm; you don't know how the decisions are made. These are companies that make the decisions, and actually not in your best interest. Because usually you pay them by the month, which means the longer you're looking for your perfect partner, the longer they get money from you. So they have no interest at all in giving you the best match on the first try, because then you'd pay only once. And the same goes for jobs: where are you going to work? LinkedIn makes job suggestions — where should you work?
And even on the employer side, they get a bunch of applications, and then LinkedIn or other algorithms help them decide who to employ. So there's a lot of algorithms and situations
where AI is already making the decision today, and we don't understand those decisions. And if there is bias at play — you have the wrong name, you went to the wrong school, you have the wrong gender or ethnicity — then, yeah, bad for you.
Okay, this is today. It's serious, but not too bad, in a way. Because as the intelligence of the AI increases, errors are going to have more severe consequences.
And we're going to face unemployment. This is already talked about in the media. So what I want to say is that, for instance, if you have self-driving cars, and the AI in the self-driving car makes an error, then maybe you die, right? Probably the AI is better at driving than you or than most humans.
So there will be less casualties with AI driving. This is a broadly accepted statement. But still, every error that is made could potentially end a human life.
And, yeah, robots will take our jobs.
So in ten years, we won't need any more drivers — no more cab drivers, no more whatever. This is already partly happening. Amazon, for instance, already has a lot of robots in retail.
So, as you just read, with a 50% chance AI will outperform humans in all tasks, which means there are essentially no jobs left — with a 50% chance.
And we had a similar situation already: from 1811 to 1817, when the mechanised looms were introduced and the Luddites started to destroy them.
And what essentially happened was that a lot of soldiers were sent to end the revolt, and that destroying a mechanised loom was penalised with death. So people were killed for breaking machines.
And perhaps this will happen again. 200 years ago, about 90% of all humans were working in agriculture — farming and herding — and today it's less than 1%. But we had 200 years to adapt to that situation, to that job loss.
Like, it happened before, right? So, machines took over human jobs, it happened before. Maybe we can just cope with that again. But we'll see, we don't know yet. And also in between, there were, like, the French Revolution and stuff.
So, very drastic changes happened in between, and it could be that such drastic things are in our future as well. Some of that we already see. For instance, the amount of money and property owned by the top 0.1% in the US will soon be more than that owned by the bottom 90% of the US. So, this crossover is going to happen. And, yeah, incomes are also increasing the most at the C-level, which means we're going to have fewer jobs, and fewer people — the ones with the machines — are essentially going to decide what's going to happen. And there's another point,
which is that jobs are a form of identity for humans. There's this dating site in Germany where you don't see the name, you only see the job — which means that, according to that dating site, the job is essentially more important than the name. And if you lose your job, if your profile says "unemployed", well, who is going to date you, right? And another thing is probably coming up in the short term, if we have AI that is very powerful. So, as you can see, as the intelligence goes up,
so goes the severity of the problems. If we have more intelligent AI, as I said, the criticality increases, because — the more power, the more... Damn it. Yes, thanks: with great power comes great responsibility. That's what I wanted to say. And the same happens with AI. AI gets more powerful, and the worst-case scenarios get more problematic. Any idea when the first lethal autonomous weapon came up?
So, a weapon that can decide on its own who it's going to kill, without a human interfering. Five years ago, who thinks five years ago? Ten years ago, 20 years ago, 100 years ago, 200 years ago. Actually, it was very early,
because it's the landmine. The landmine, I mean, it's not intelligent in any sense, but it decides who it's going to kill. Like, if you step on it, you're dead, essentially. Even if you're a friend, the wrong person, and you step on it, you're still dead.
So, it decides on its own who it's going to kill. And we're going to see more of that, only with better discrimination, so with better decisions. Like, we already have drones that go through the air and have a look at what's happening on the ground. Then we have support systems that just deliver stuff. But we also have systems that are armed, that can shoot at people. And we're going to see more of that, and even in different dimensions than we're used to.
So, this is a project that's happening in Israel right now. They're creating a robot that essentially flies and carries a poison injector. It flies to wherever it decides and can inject a dose of poison that usually kills a human. And, as I said earlier, you can't shoot that down. You can't wear a kevlar vest against that, right? So, there's no way to defend against it. This video was shown before the UN convention. And what it shows is a drone that's flying
and that targets, in that case, a demo crash dummy, so to say. And it has three grams of explosive. So, it doesn't even have a bullet or something.
It's just the explosive, and it targets, in that case, the crash test dummy on its own. Now, this hasn't been built on a large scale; this is not a weapon that you can buy. But you can see that everything we need is there. We have the drones, we have the cameras, we have the face recognition, we have the explosives. So, anybody who's good enough can build that. And you could build these en masse, build them for like $10 apiece and tell them to kill everybody with a beard, kill anybody who's black, kill anybody who's female, kill anybody, you know, who you don't like.
So, you can program these things. And this is actually a very real thing, because right now in America, you still have somebody sitting at the drone's controls to actually authorize the kill, so to say. So, there's still a human in the loop. But we had around 1,500 people being killed by drone strikes in March 2017 alone. So, this is a substantial number of people. And this was in Syria, and the USA is not even at war with Syria, right? They just kill these people because they think these are terrorists
or potential terrorists. And, you know, if you think about that and combine it with surveillance, then it's a terrifying outlook. Then you can enable AI-based surveillance. This is from the movie Batman, if anybody saw it. But it's already happening at large scale in China. You get a rating, and that rating has an impact on your daily life: how you can commute, which train you can take. You get surveilled, and a lot of data is collected about you.
And another thing is that AI can influence people. If you talk to 30 people, and all 30 tell you the same thing, for instance that you should vote for Donald Trump, then you're likely going to be influenced. Not everybody, but some percentage of people is going to be influenced. We are herd animals, so to say. If a lot of your peers tell you that they like or dislike a certain thing, it's going to influence you. And if those peers are actually virtual, AI and not real, and maybe you don't know, because you're only chatting or Skyping, in forums or on Facebook or whatever, you're still going to be influenced. And as we saw, this has probably happened in the US already. So, this is not fiction anymore. Which leads to Vladimir Putin saying: whoever becomes the leader in AI will become the ruler of the world.
So, this is something that man has said, and it's maybe frightening. And you can have AI-based cyber war. If you have an AI that is intelligent enough, then it can hack into whatever system it wants. The biggest bank raid ever undertaken didn't happen in the real world, but virtually: the Bangladesh Bank had 81 million dollars taken from its systems by hackers. And if you consider that AI systems could do that much better than humans, then, you know, no system is secure anymore. Okay, I just talked a lot about the bad things that could happen.
Now, we have a startup ourselves, we use AI to test software. So, I'm not against AI. There's a lot of good that can come from AI, but the good we just accept, right? We're grateful, and nobody needs to warn us about it. What we need to be aware of are the bad things that could potentially happen. This is why I give this talk. But there are also a lot of good things, which I want to come to. So, 99% of all species that ever lived on Earth are now extinct. And maybe we're going to be extinct at some point as well.
And the problem is, with AI, there's no opt-out. So, the thing is, if you, as Germany, say, we don't want to take part in AI because it's so dangerous, well, the other people are going to do it anyway. Russia, China, US, they're going to do it and create AI.
Yeah, but as I said, there are also benefits. For instance, higher productivity and fewer human errors. As I said earlier with the cars, it's probably a good idea to let AI drive, because we will have fewer casualties. And we get better decisions overall. We may get individual bad decisions, but overall, we probably get better decisions. We could get better societies: we could end war, we could end hunger, we could end diseases with AI. And essentially, we could get immortality and space exploration,
like in the movies, you know, total science fiction. This is possible; AI could give us that. And we also need it. For instance, for Nigeria, the World Economic Forum said that with the systems that are in place today, training enough doctors to reach European levels would take 300 years. So, with existing systems, without creating new schools and new universities, it would take 300 years to get all of those doctors. With AI, we could have that in five or ten years. You could have an AI that makes a diagnosis and says, okay, this guy has whatever disease he has, and he needs that medicine. Possibly. So, yeah, as I said, we could also get to immortality.
And for those of you who think it will happen in 50 or 100 years and doesn't matter to you: things are changing, and fast. Don't believe me? Well, in 1997, you were told that you shouldn't get into strangers' cars and you shouldn't meet people from the internet. Today, we summon strangers from the internet to get into their cars. My mom told me not to sit too close to the TV because it's bad for my eyes. Well, today,
you probably know all about that graph: we have an exponential curve, doubling essentially every year. And we are badly wired to understand what that means. If you compare the growth of calculation power per second to filling Lake Michigan, this is what happens: for a long time you don't even see the water at the bottom, you can't recognise it, and at the end, the lake fills very fast.
So, let me say that again: this is what exponential growth means. You don't recognise it until it's too late. Say that by 2025 we have AI that is as intelligent as humans. You could put that date anywhere; if you say it's 2050, I don't care. I don't care about the exact numbers, I just want to give you an impression of what it means. Then, one year earlier, there's only half the power. Two years earlier, a quarter of the power. Three years earlier, an eighth. And today, seven years earlier, one 128th of the power, which, you know, could already be the case, I don't know. Which means: at the human-level-intelligence station, when we say AI is arriving, well, it won't stay there. As soon as it's as intelligent as we are, the next second it's more intelligent.
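The doubling arithmetic above can be sketched in a few lines. This is an illustration only: the target date and the one-doubling-per-year rate are the talk's illustrative assumptions, not predictions.

```python
# Illustrative sketch: assume compute doubles every year, and normalize
# "human-level" compute to 1.0, reached in some target year.
def fraction_of_target(years_before: int, doublings_per_year: int = 1) -> float:
    """Fraction of the target compute available N years before the target year."""
    return 1 / (2 ** (years_before * doublings_per_year))

for n in (1, 2, 3, 7):
    print(f"{n} year(s) earlier: 1/{2 ** n} of the target compute")
```

Seven years out you sit at 1/128 of the target, which looks like almost nothing on a linear plot; that is exactly why exponential progress is hard to recognise until it's too late.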
the next second, it's more intelligent. Okay, and if it comes, like, if everything works out in the best possible way, if we have an AI that is as intelligent and still does what we want, what would we want?
Like, we are immortal, we don't have any diseases, we don't have any war, we don't have, like, we don't need to work. What are you going to do with your life? Then, AI has to give us the answer, right? Also. Okay, so this has essentially been my talk.
Now, what can you do, and why do I give this talk? As we already saw at the beginning, you need to cultivate suspicion. If an algorithm tells you something, who to date, who to employ, whatever to do, you should not blindly trust the algorithm. Even if it's an algorithm, it's still going to have bias, it's still going to have errors. Scrutinize. And just because we don't understand which biases are in a system doesn't mean that there are none. Ethics, essentially, is very complicated. Human ethics, as we understand it, is very, very complicated. And AI does not give us a get-out-of-ethics-free card. We still need to solve the hard problems ourselves; AI doesn't do that for us. So, if you think this is essentially a good idea to tell others,
then you can, so these are some institutions that are working on those hard problems with AI, and you can raise awareness and you can contribute. Now, for me, as I already said, we are a startup. We use AI to test software. So, if you happen to have software of your own,
you may want to check us out, because we use AI to find problems in your code. If you want to dive deeper into this topic, there's a blog called Wait But Why, which is very much recommended. Many of the pictures that you saw were actually from that blog. I would recommend reading it, but it's very long, so it takes time. There are also some movies: The Animatrix, for those of you who know The Matrix. The Animatrix explains what happened, how it came to be that we ended up with the Matrix.
And Transcendence, with Johnny Depp. And if you like to read, there's Origin by Dan Brown, or, if you like it a bit funny, QualityLand by Marc-Uwe Kling. Okay, that's essentially my talk. Thanks for coming, thanks for listening. Please give me a good grade if you liked it.
Now, are there any questions? There was another talk today about what AI is based on, and the speaker said, okay, when we talk about AI, it's always based on neural networks. Is this true? Is it more statistics, or hand-crafted algorithms, or what do you really call AI today? So, first of all... The question was: what is AI today based on?
And most of... So, there are different approaches, but most of what we call AI today is actually neural networks. Not all of it: there are, for instance, genetic algorithms, and there is symbolic AI, problem solvers where you hand-craft the AI, as you just said. But most of what we use today is neural networks, which means it is statistics, because neural networks are essentially statistics, just in a way that humans can't follow anymore. Yes? It's a bit of a provocation, but I'd like to ask: should there be some regulation before an algorithm is let loose on humans? So, the question was: should we regulate whether algorithms are released on humankind, so to say. And funny enough, the DSGVO, the GDPR, already made some steps in that regard. I don't know if you've read the actual text, but it says that you can request that a human looks at your case. So, there are already steps on the way. If, for instance, an algorithm decides against you, you can request that a human takes a look at your specific case. In the example with the loan, you could say, okay, I don't trust the AI, I want a human to do that. And, yeah, you can request that. But for many companies, you don't even know where they use AI in the background. And also, there's much to be done still.
Like, you know, we want more research, we want more people that are aware of the problem, that don't trust blindly, stuff like that. So, there's still much to be done. Yes?
So, the question being: are these implications properly taught at universities? Anybody want to answer that? Okay, so, I think... In my experience, it's not. All those big implications for humanity and for society, as far as I know, are not taught and talked about. So it would be a good idea to bring that to universities. Yes?
We always associate intelligence with machinery, but it's really different than that. From my impression, AI focuses completely on the left side of the brain. It's always analytical: speaking, chess playing. But intelligence is very diverse. It's also on the right side of the brain, and especially in the interplay between the left and right side. Are you aware of any definition of intelligence that takes that into account? So, am I aware of a definition that takes the right side of the brain into account as well? I'm not aware of a scientific definition. But another thing is that the field has been broadened,
and for psychology, for instance, they recognize that intelligence as a term doesn't cut it, and there are, like, emotional intelligence and stuff like that, so, subtleties. But this is mostly, as far as I know, disregarded by the computer science guys.
So, no, I'm not... I mean, yes, in a way, I'm aware, for instance, of emotional intelligence, but I'm not aware of systems that take that into account specifically. So, people try to come up with systems that act human-like. I don't know if you saw the Google Duplex presentation,
so if not, you should watch it, it's scary. There's a machine talking on the phone to people, and it's making human noises, like, it pauses and stuff,
so it really tries to get across that it's human. As I said earlier, we also have that in chatbots: a chatbot is so fast, it could answer instantly. But people recognized that a chatbot that answers immediately doesn't appear to be human. So, what they did is they programmed the chatbot to wait and then answer, so that people don't instantly realize that it's a chatbot. So, it's taken into account in some ways, but not specifically, as far as I know. You're welcome. Yes?
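That delayed-answer trick is simple to sketch. This is a minimal illustration, not any real chatbot's code; the typing speed and jitter values are made-up assumptions.

```python
import random
import time

def humanlike_reply(answer: str, chars_per_second: float = 12.0) -> str:
    """Return the chatbot's instantly computed answer only after a pause
    roughly as long as a human would need to type it, plus some jitter."""
    typing_delay = len(answer) / chars_per_second
    time.sleep(typing_delay + random.uniform(0.2, 1.0))
    return answer

# The bot "types" this reply for about two seconds before it appears:
print(humanlike_reply("Sure, I can help you!"))
```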
So, these issues arise with each and every technology, and it will for sure be the same with AI. So, what can we do? Of course, I will scrutinize each and every algorithm that we come up with, but in the global picture, irrespective of any single country, is there any organization that we should focus on, or should humanity build an organization that decides in which way bots should be built, or what should not be allowed? Because this cannot happen in any one place alone.
So, the question was whether there is a national or global institution that regulates such things and that people can talk to. As far as I know, there's not. There's OpenAI, founded by, among others, Elon Musk, which is concerned with this. And there are some researchers trying to set up a European institution for AI research, like CERN is for research on subatomic particles and physics, but as far as I know, it hasn't happened yet. So, the answer is no; you'd have to go to one of these organizations.
Okay, any more questions? Okay, so, thank you for your time, thank you for listening, I hope it was interesting, and yeah, hope to see you again.