
AI: from Aristotle to deep learning machines


Formal Metadata

Title
AI: from Aristotle to deep learning machines
Number of Parts
4
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
The talk presents briefly the main principles used in AI, from Aristotle’s true/false logic, through fuzzy logic, evolutionary computation and neural networks, to arrive at the current state-of-the-art in AI – the deep learning machines. One particular such machine, developed in the presenter’s KEDRI Institute and dubbed NeuCube, is designed for deep learning of complex data patterns so as to predict future events. It uses the latest AI technique, called spiking neural networks (SNN), which mimic the learning capabilities of the human brain. NeuCube has already demonstrated its usefulness when dealing with Big Data such as brain EEG and fMRI data; brain-computer interfaces; seismic data; and environmental data for stroke prediction. This is the beginning of understanding complex patterns of changes of variables in space and time and their relevance to future events. It will have a significant impact on our understanding of the dynamics of the micro and the macro worlds, with particular application in medicine.
Transcript: English (auto-generated)
Good evening. Kia ora tatau. Welcome to the first of this year's Gibbons Lectures. My name's John Hosking. I'm the Dean of Science here at the University of Auckland. It's my very great pleasure to welcome you here this evening.
The Gibbons Lectures are an annual series of talks hosted by the Department of Computer Science in association with ITPNZ. The goal of each lecture is to describe detailed developments in a particular research area to a general but technical audience, from computer science students at all levels to IT practitioners in other departments and outside the university.
The Gibbons Lectures are named in memory of Associate Professor Peter Gibbons, a former head of the Computer Science Department and a very good friend and mentor to many of us who hail from that department. And in doing so, I'd like to acknowledge Peter's sister Sally, who's here this evening. So welcome, Sally.
This year's Gibbons Lectures are on artificial intelligence and its impact. This, of course, is a topic which has been receiving a lot of attention in the popular press, with remarkable successes of software such as machine translation systems, driverless cars, voice response systems,
as well as corresponding concerns over job losses through automation. Our lead speaker for this year is Professor Nikola Kasabov from Auckland University of Technology. He will discuss the research progress of AI from its deepest roots to the current frontier, applying AI to the big data of medicine.
Nick is Director of the KEDRI Research Institute at Auckland University of Technology. Originally from Bulgaria, Nick has a PhD from the University of Sofia,
has worked at the University of Essex, University of Otago, and since 2002 at AUT. He has what I can only regard as a phenomenal publication history with over 600 works. As you can see from the title up there, he's a collector of prestigious fellowships.
He's a fellow of the IEEE, the IITP, and the Royal Society of New Zealand, of course. He has research interests in neurocomputation, artificial intelligence, machine learning, data mining and knowledge engineering, neuroinformatics, bioinformatics, signal, speech and image processing,
and this combination makes him eminently qualified to talk to us about the topic in hand. So, Nick, I now invite you to deliver the first of the Gibbons lectures for 2017 on AI from Aristotle to deep learning machines. Nick.
Good evening, ladies and gentlemen, colleagues and friends. It is my great pleasure to give the first lecture of this series that is organized by the Computer Science Department at the University of Auckland and the Institute of IT Professionals.
Thank you very much for organizing that. It is a very timely series having in mind the AI revolution that is going on in the world. There will be four lectures, and if you expect me to cover all aspects of AI, that is not going to happen.
What is going to happen is that I'm going to talk a little bit about the evolution of the AI methods, and I will be a little bit more technical, to explain what is behind this symbol of AI.
Is it something we should be frightened of? If we understand it better, we will be more familiar with the development, and we can actually have more vision about the future of artificial intelligence. What is AI? Well, probably the simplest definition is: it is part of the interdisciplinary information
sciences area that develops and implements methods and systems that manifest cognitive behavior. The main features of AI currently, to mention only some of them, are learning, adaptation, generalization, inductive and deductive reasoning, human-like communication, natural language processing, and many more.
Some more features that we will see in the future are consciousness, self-assembly, self-reproduction and AI social networks, and these features are coming now to the current AI systems.
I will talk first about the evolution of the AI methods, and then I will cover a little bit about the computer platforms that artificial intelligence has inspired us to build. Then I will talk about applications, and about AI in New Zealand.
We have to be aware where we are and what we are going to do in the future. And the future of AI, of course, it is very, very difficult to predict, but I have my view, and other people, other lecturers will have their view, and that is a matter of discussion worldwide.
The evolution of AI methods: many philosophers consider Aristotle to be the originator of deductive reasoning. And Aristotle was a very pronounced philosopher and scientist, who was a pupil of Plato and a teacher of Alexander the Great.
And if I illustrate what deductive reasoning is, I would say this is the example that is in all logic books and AI books. We have deductive reasoning if we have a statement like: all humans are mortal; or, as a rule: if human, then mortal.
And we have a new fact, Socrates is a human, and the deduced inference is: Socrates is mortal. Well, it is the simplest possible example of deductive reasoning.
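To make the mechanics concrete, here is a minimal sketch in Python of this kind of rule-based deduction (modus ponens), using only the lecture's Socrates example; the function name and data layout are illustrative assumptions, not anything from the slides.

```python
# A minimal sketch of deductive (modus ponens) reasoning: a rule base of
# "if X then Y" rules is applied to the known facts until no new fact appears.

rules = [("human", "mortal")]    # the rule: if human, then mortal
facts = {("Socrates", "human")}  # the new fact: Socrates is a human

def deduce(rules, facts):
    """Repeatedly apply every rule to every matching fact."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for subject, predicate in list(derived):
            for if_part, then_part in rules:
                if predicate == if_part and (subject, then_part) not in derived:
                    derived.add((subject, then_part))  # deduce a new fact
                    changed = True
    return derived

print(deduce(rules, facts))  # includes ('Socrates', 'mortal')
```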
But Aristotle went further. He introduced epistemology, which is based on the study of particular phenomena and leads to the articulation of knowledge, rules and formulas across the sciences. So he worked in botany, zoology, physics, astronomy, chemistry, meteorology, psychology, etc. According to Aristotle, this knowledge was not supposed to change. It became dogma.
In places Aristotle goes too far in deriving general laws of the universe from simple observations, overstretching the reasoning and conclusions. Because he was perhaps the most respected philosopher in Europe, European thinkers sometimes accepted his erroneous positions, such as the inferior role of women,
which held back science and social progress for a long time. But this first deductive logic theory inspired the development of the so-called symbolic
AI, where logic rules and deductive reasoning started to appear in the 18th and 19th centuries, including correlations and implications, propositional logic, and Boolean logic, which is the basis of our contemporary computers;
predicate logic, with the language Prolog; probabilistic logic; rule-based systems; expert systems. We should say that logic systems and rules are too rigid to represent the uncertainty in natural phenomena. They are difficult to articulate and not adaptive to change.
So then a step further, to account for uncertainties in human-like reasoning, was introduced by Lotfi Zadeh: fuzzy logic. And fuzzy logic deals with so-called fuzzy propositions. Here we have a fuzzy membership function that represents a variable called time.
And the time is represented as short, medium or long, as membership functions of fuzzy terms. And the propositions can be fuzzy: washing time is short. If the washing time is 4.9 minutes, it is short to a degree of 0.8 and medium to a degree of 0.2.
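As a minimal sketch in Python, assuming simple linear membership functions whose breakpoints are chosen purely to reproduce the lecture's numbers (they are not from the slides), the degrees can be computed like this:

```python
# Fuzzy membership sketch: a washing time of 4.9 minutes comes out
# "short" to degree 0.8 and "medium" to degree 0.2, as in the example.

def short(t):
    """Full membership up to 4 min, falling linearly to 0 at 8.5 min."""
    if t <= 4.0:
        return 1.0
    if t >= 8.5:
        return 0.0
    return (8.5 - t) / (8.5 - 4.0)

def medium(t):
    """Rises linearly from 0 at 4 min to full membership at 8.5 min."""
    if t <= 4.0:
        return 0.0
    if t >= 8.5:
        return 1.0
    return (t - 4.0) / (8.5 - 4.0)

t = 4.9
print(f"short: {short(t):.1f}, medium: {medium(t):.1f}")  # short: 0.8, medium: 0.2
```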
So fuzzy rules, like "if wash load is small, then washing time is short", can be articulated and can be implemented. And that was actually a very, very important development of artificial intelligence in Japan,
especially with the rice cookers and other fuzzy logic devices. However, fuzzy rules need to be articulated in the first instance. They need to change, adapt and evolve through learning, to reflect the way human knowledge evolves. Then further challenges appeared as turning points in artificial intelligence.
And one of them was the Turing test for artificial intelligence. The Turing test was initially proposed by Turing as a question, can machines think? And it was a test that he later called the imitation game, where an observer is communicating behind a curtain with a machine and a person.
If the observer cannot distinguish whether they are communicating with a machine or a person, then the machine can be said to think; that is what artificial intelligence is about. The Turing test has been highly influential and widely criticized.
However, it has become an important concept in the philosophy of artificial intelligence. The test, though, was too difficult to achieve without machine learning in an adaptive, incremental way. And learning is something that we humans do all the time, every minute.
And learning from data, inspired by the human brain, was one of the directions in which to develop machine learning systems. And the human brain is the most sophisticated product of evolution as an information processing machine.
Why is that? Well, the human brain consists of 80 to 100 billion neurons and trillions of connections, and it has evolved through millions of years of evolution. And the brain can deal with different memory types: short term, in the membrane potential of the neurons; long term, in the synaptic weights; and genetic.
It deals with different scales of time: nanoseconds, milliseconds, minutes, hours, many years, like evolution. So what could be a more inspirational source for machine learning than the brain?
And the single neuron, if we look at the single neuron, is a very sophisticated information processing machine. It deals with time, frequency and phase information; thousands of genes are expressed in the nucleus;
and there are tens of thousands of inputs to each of the neurons, and just one output. And the question is: can we make artificial intelligence learn from data like the brain? This question was addressed as early as 1943.
McCulloch and Pitts introduced the so-called artificial neuron, with inputs and connection weights that represent synaptic weights and are subject to learning, and an output that is calculated through an output function. And then Rosenblatt introduced the so-called perceptron, the first neural network, a very simple one.
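Here is a minimal sketch in Python of such a neuron and the perceptron learning rule; the training task (logical AND), the learning rate and all starting values are illustrative assumptions.

```python
# A classic artificial neuron: weighted inputs, a bias, a step output
# function, trained with the perceptron rule to compute logical AND.

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs passed through a step output function."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                       # a few epochs suffice here
    for x, target in data:
        error = target - neuron(x, weights, bias)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print([neuron(x, weights, bias) for x, _ in data])  # [0, 0, 0, 1]
```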
And then it was further developed into multilayer perceptrons and large-scale neural networks. But the early neural networks were so-called black boxes, and also, once trained, they were difficult to adapt to new data without much forgetting.
And there was quite a lot of thinking about neural networks that they cannot do much, and that they are black boxes and therefore not very useful. And that is why a new development of neural networks was made, in terms of neural networks that can not only learn from data,
but can also be used to extract rules, to extract patterns, to extract so-called knowledge from these data. And the first of these neural networks were called neuro-fuzzy systems.
So no more the black box curse. Neural networks can be trained, with inputs and outputs and neurons here. And after analysis of the trained neural network, rules can be extracted that explain the essence of the data. For example: if input one is high and input two is low, then the output is very high.
So it also uses fuzzy terms; this was the combination of neural networks and fuzzy logic, to make neural networks represent more human-like thinking.
So this is one example of using neural networks to extract rules from data, relating to the evaluation of renal function. The gold standard is using one function for everybody, anywhere in the world, at any time. Instead, as we can see, we can train a system that clusters patients' data into different clusters, and it extracts a rule for each of these clusters.
For example, this cluster is defined by an age of about 21. And this is the membership function: female, creatinine, et cetera, et cetera. And this is the function that was derived for this particular cluster.
Unfortunately, I don't belong to this cluster, because my age is not about 21. But there is another cluster here that will tell me what function can be used to evaluate my renal function.
And this was also a very important development in neural networks. And I should say that, 24 centuries after Aristotle, we now have systems, artificial intelligence systems,
that can automate the rule and knowledge extraction from data. It doesn't mean that humans do not have to do anything. No, they have to observe these rules and they have to make sense out of that. But these rules can change. They can vary from group to group and they can be updated all the time with data.
I'm sure Aristotle would have been very happy to see that. Now, deep neural networks are the current development in neural networks. Why deep? Because, first of all, they have many layers of neurons connected to each other.
And second, they have neurons that can actually look deep into the data and extract features from smaller sections of the data. And this is a multilayer deep network, a so-called convolutional network.
We have neurons that extract features from the palm and then these features are combined in the next level to recognize the hand. And then, of course, they will be recognizing the leg and the body. And then it is now recognized that this robot is sitting and the robot has yellow eyes and it has big feet.
So this is the approach that is currently used for many pattern recognition systems. And the so-called convolutional networks indeed do the deep analysis of features of smaller sections.
For example, this neuron can calculate the maximum value within this field of the area, so that was six.
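As a small sketch of that pooling operation in Python: the 4x4 feature map values below are made up, chosen only so that the first pooled value is six, echoing the slide.

```python
# 2x2 max pooling with stride 2: each output neuron takes the maximum
# value within its small receptive field of the feature map.

feature_map = [
    [1, 3, 2, 0],
    [5, 6, 1, 2],
    [0, 2, 4, 1],
    [3, 1, 2, 6],
]

def max_pool(fm, size=2):
    """Return the max of each non-overlapping size x size block."""
    pooled = []
    for i in range(0, len(fm), size):
        row = []
        for j in range(0, len(fm[0]), size):
            row.append(max(fm[i + di][j + dj]
                           for di in range(size) for dj in range(size)))
        pooled.append(row)
    return pooled

print(max_pool(feature_map))  # [[6, 2], [3, 6]]
```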
Deep neural networks are nothing new, of course. They are inspired by the human brain, and human vision was used as inspiration to develop the first deep neural networks, like the Cognitron and Neocognitron by Fukushima. So these neural networks have many layers, layers that capture different features, for example contrast and edges. They are combined, combined, combined in different layers, and the layers
correspond to the visual cortex, until the recognition is done. Well, deep neural networks are excellent for vector, frame-based data, but not so much for temporal or spatio- and spectro-temporal data. There is no notion of time or of asynchronous events learned in the model.
They are difficult to adapt to new data and the structures are not flexible. If we ask the question, how do we humans learn pieces of music, for example, a performer who plays Mozart makes about 10,000 strikes on the piano within one hour.
Without looking at notes, that's a deep learning patterns that are learned in the human brain. Still, the deep neural networks that are at the moment present cannot do that and they are very limited.
So now we would like to move further, to develop systems that are not limited in terms of the number of layers. We need systems that are as deep as needed, driven by the data. One way to do that is to use the so-called third generation of neural networks: the so-called spiking neural networks (SNN).
Spiking neural networks represent information as trains of spikes, binary events at certain times. So time is part of the information representation here.
And a neuron receives many spikes from many inputs, and if the membrane potential grows above a certain threshold, the neuron emits a spike. This is the so-called spiking neuron. Spiking neural networks have the ability to capture time, to learn temporal patterns.
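Here is a minimal leaky integrate-and-fire sketch of that behaviour in Python; the leak factor, threshold and input values are assumptions for illustration, not parameters from the lecture.

```python
# Leaky integrate-and-fire neuron: incoming spikes raise the membrane
# potential, which leaks over time; crossing the threshold emits an
# output spike and resets the membrane.

leak, threshold = 0.9, 1.0
incoming = [0.3, 0.4, 0.0, 0.5, 0.0, 0.6, 0.7]  # summed weighted input per step

potential, out_spikes = 0.0, []
for t, inp in enumerate(incoming):
    potential = potential * leak + inp   # leaky integration of the inputs
    if potential >= threshold:           # threshold crossed:
        out_spikes.append(t)             #   emit an output spike...
        potential = 0.0                  #   ...and reset the membrane

print(out_spikes)  # spike times, here [3, 6]
```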
And also, they are very easy to implement in hardware and in software. Because they deal with spikes only, which are binary events, they are very economical in information processing. Well, the question is: can we use these neural networks to develop large-scale deep learning machines?
And as the IBM fellow Dharmendra Modha, who is the chief scientist of brain inspired computing in IBM research says, the goal of brain inspired computing is to deliver a scalable
neural network substrate while approaching fundamental limits of time, space and energy. And indeed, spiking neurons can deal with spatiotemporal data, they can integrate different modalities, they can deal with time, space and synchronization, and they can evolve.
Well, how can we use these phenomena to develop deep learning machines? One example is developed in my lab: a deep learning machine which is called NeuCube, or neural cube.
This NeuCube, whose architecture we have here, consists of a three-dimensional so-called cube that is based on spiking neurons. And this cube is scalable: you can have a cube of 100 neurons or of 100 million neurons, and it is still possible to simulate it on different platforms.
And this cube can learn patterns like the brain learns deep patterns, in different areas of the brain, at different time scales. So this deep learning machine has brain-like learning methods, such as spike-timing-dependent plasticity (STDP).
When the neurons receive inputs they spike and then that causes the creation of connections between the neurons.
And these connections are meaningful, because they capture temporal associations between the spatially distributed input data. That is what the brain does. We don't have any limitation in terms of how deep these patterns are; they can be as deep as needed.
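A sketch of the STDP idea in Python: a connection is strengthened when the pre-synaptic neuron fires shortly before the post-synaptic one, and weakened otherwise. The learning rates and time constant below are assumptions, not NeuCube's actual parameters.

```python
import math

A_plus, A_minus, tau = 0.1, 0.12, 10.0  # learning rates and time constant (ms)

def stdp(weight, t_pre, t_post):
    """Update one synaptic weight from a pair of spike times (in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: causal pair, potentiate
        weight += A_plus * math.exp(-dt / tau)
    else:         # post fires first (or together): depress
        weight -= A_minus * math.exp(dt / tau)
    return max(0.0, min(1.0, weight))   # keep the weight bounded

w = 0.5
w = stdp(w, t_pre=12.0, t_post=15.0)    # causal pair: the weight grows
print(round(w, 3))                      # 0.574
```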
And in my lab we developed a machine, or rather a development system, that everybody can use; they can download this software in a simplified form.
They can develop their own deep artificial intelligence machines on their data, and visualize the models in a three-dimensional virtual reality space, to understand what the data is about. Unfortunately, we can't do this with the brain, but we can do this with the models that we create, to analyze how they learn in an online manner.
Because now it is online learning, there is data coming here and connections are being created. So this is the principle that we use as an example of deep learning machines. Now we know that along with the lifelong learning in our brains there is quite a lot of learning happening in nature as evolution.
Another area of artificial intelligence called evolutionary computation uses some principles of learning from natural evolution.
And this is Charles Darwin: species learn and adapt through genetic evolution, with crossover and mutation in populations over generations. Genes are carriers of information: stability versus plasticity. Evolutionary computation is also a learning paradigm in artificial intelligence.
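As a toy illustration of selection, crossover and mutation in Python: the fitness function (counting ones in a bit string) and all rates below are arbitrary assumptions made only to show the mechanism.

```python
import random

def fitness(genome):
    return sum(genome)                      # more ones = fitter

def evolve(pop_size=20, genes=16, generations=30):
    """Evolve a population of bit-string 'genomes' toward all ones."""
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]    # survival of the fittest
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]       # one-point crossover
            if random.random() < 0.1:       # occasional mutation
                i = random.randrange(genes)
                child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(fitness(evolve()))   # approaches the maximum of 16
```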
And it can be used to optimize the parameters, the "genes", of learning systems. Now, the development of the methods of artificial intelligence triggered the development of new computational hardware platforms. And the beginning was the von Neumann computer architecture, which uses memory, a control unit and an arithmetic unit as separate units.
And this architecture is still with us, realized in our laptops and computers. And it is realized also in general purpose computers, or in specialized fast computers such as GPUs and TPUs (tensor processing units).
Or in cloud-based computing platforms. But the alternative computer architecture that evolved due to the development of brain-like artificial intelligence is called neuromorphic computational architecture.
And neuromorphic computational architecture integrates data, programs and computation in one structure, similar to how the brain does it. We don't have a separate memory; we have the memories and the computation, the rules and the learning, together in one brain.
A third type of architecture developed is the so-called quantum-inspired architecture, using quantum bits (qubits), which are in a superposition of one and zero. So in all these architectures, the common thing is that they use binary representation by bits.
But the bits in the von Neumann architecture are a static representation of data. In neuromorphic computation, bits are associated with time. And in the quantum architecture, bits are in a superposition of states. AI models can be simulated using any of these architectures, if available, but with varying efficiency.
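In standard quantum-computing notation (a general fact, not from the slides), a qubit's superposition of the basis states is written as:

```latex
% A qubit is a superposition of the basis states |0> and |1>, with
% complex amplitudes whose squared magnitudes sum to one.
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1
\]
```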
And if we look at the cloud computing platforms that are massively available now for AI applications, we should say that they make it possible to rapidly build cognitive, cloud-based applications for data exploration. Such systems have been released by rivals competing for world domination.
And this, of course, we all know: the cloud computing facilities by Google, Facebook, Microsoft, IBM, Baidu, Amazon and many more. And this is one example: the IBM Watson discovery services, where people can upload their data and do some pre-processing.
And they can do some modelling, and they get the output delivered. And this is data, and here we can have the same with text that could be entered into the system, etc. And I should say that the cloud-based computer platforms are useful, but they are limited: they have a limited set of methods,
mainly for online data analysis, and they are not very suitable for processing streaming data. Neuromorphic hardware systems were then developed, to meet the requirements of brain-like computation.
It started with the Hodgkin-Huxley model. Carver Mead at Caltech developed the first electronic circuit that realizes a neuron. The first silicon retina was developed by Misha Mahowald; unfortunately, she had a very short life.
At the moment there is quite a lot of competition, I would say, in the development of neuromorphic hardware: from the IBM TrueNorth, with one million neurons and one billion synapses, to the Stanford Neurogrid, and also the SpiNNaker, developed at the University of Manchester under Steve Furber's leadership.
What is the next step in the computer platforms that will support AI? Well, maybe quantum-inspired computation. Quantum-inspired computation doesn't mean quantum computers at this stage,
because we have quantum principles used in quantum-inspired evolutionary algorithms, quantum-inspired neural networks and quantum-inspired optimization of deep learning machines, but they are still in their infancy. And this is Ernest Rutherford; he and many other people contributed to quantum physics.
Some principles are now being used, like the principles of superposition and quantum gates. We use quantum vectors, and of course we have some other principles, like interference, parallelism and entanglement.
Well, that was all about AI methods and AI platforms. Now let us look at the applications. Well, I'm not going to read all of this. I just want to say that on one hand we have the technology stack, and this is actually a diagram that was developed by Bloomberg.
It is probably not quite complete and needs some updating. But we have here the techniques and the platforms that could enable AI applications.
So we have the machine learning, we have the natural language methods and systems. We have development, we have data capture, we have open source libraries, and we can see better here, actually, the way to develop machine learning. So these are the technological platforms and methods that would enable development of
a lot of machine learning and artificial intelligence applications in the areas of healthcare. I won't read the companies; there is only a short list of companies which deal with that. We talk about industrial applications, agriculture, education; we talk about autonomous
systems as vehicles, aerial, we talk about enterprise functions, customer support etc. We talk about visual, audio, sensory and other information processing. And this is only a short list of applications of artificial intelligence,
not to mention radio astronomy and other large projects that are using that. Now I'm going to talk about only some of them that some of them were developed here in New Zealand, some of them were developed in my lab, some of them were developed in other universities in New Zealand and in other places. But I will just select a few of them to talk about these applications.
We won't talk about all of them. Well, let's start with AI applications in medicine. Modeling and understanding the brain is a very important part of science and of research in artificial intelligence, because the brain is a complex information processing machine.
We would like to understand it for several reasons, not only to develop new artificial intelligence but to improve, to protect, to understand our brain for the future. And this is an example of how EEG or fMRI data can be modelled over time and the model can be used to try to understand some processes in these data.
And here we have computational models based on NeuCube that are trained on fMRI, functional magnetic resonance imaging, data. Not only can we train the system to recognize certain patterns in fMRI, to classify them, to predict them, but we can use
it in terms of understanding what the functional connectivity of the model is, in order to explain better what the data is about. We can also look at modelling electroencephalogram (EEG) data that are collected from the human brain, from the scalp.
And these data could be used for many applications at the moment. I will show only a few of them.
One application which is very, very important at the moment, with the aging population, is to predict the progression of mild cognitive impairment to Alzheimer's disease. So here we have the brain model of a mild cognitive impairment patient, and
here we have the brain model of the same patient who developed Alzheimer's disease. And we can see that there is not much happening here under the same conditions. That could be used, these models can be used to predict progression of disease. And here this is in months, but of course we can also use such models to predict states of brain in seconds.
And this is the brain of a driver before normal activities, and this is the same brain before micro-sleep. So we can see that two seconds before micro-sleep, the brain shuts down.
And then we can recognize, with a computer system that is measuring the brain signals, whether the person is going into micro-sleep or not. And that was done with the University of Canterbury. Brain-computer interfaces are a fascinating area; brain-computer interfaces are interfaces
that allow humans to communicate directly with computers or external devices through their brain signals, for example EEG data. And brain-computer interfaces can be used by paralyzed people to navigate and to move cursors, by people to move wheelchairs, or by people to communicate with each other.
They don't have to speak to each other; they can communicate with their brain signals through a computer. And there are a lot of applications of brain-computer interfaces for neural rehabilitation, exoskeletons and robot control.
That was also done in my lab. Now brain-computer interfaces are also used in applications in virtual reality and navigation, virtual reality for entertainment or for rehabilitation of stroke. This is a virtual reality system that helps people to move their hands through observing a virtual hand in a virtual
reality and using the so-called mirror neuron in their brain to activate the part of the brain that is damaged.
Computer systems now can recognize emotional faces very well, discriminating all these types of emotions with 94.3 percent accuracy. They can also recognize the emotion of the person who makes the facial expression.
Well, emotional computing is now coming: computer systems that can learn and express attitudes and emotions. A motivation for this research is the ability to simulate empathy. The machine should interpret the emotional state of humans and adapt its behavior, giving an appropriate response to their emotions.
Computer systems now can have a human face, and what Mark Sagar from the ABI at the University of Auckland has chosen is the face of his baby. So this is the face of an emotional computing system, and this is Mark Sagar.
Precision medicine and precision health is a very rich area of research and funding. Precision medicine means that everybody deserves a model that will predict the outcome for this person in the best possible
way, rather than using one model or one function or one formula for everybody, anywhere in the world, at any time. And precision medicine and precision health are based on building a model for a person, using that person's data together with a data set of many other people's personal data, to select the best neighbourhood and the best model for the best prediction for this person.
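A minimal sketch of that neighbourhood idea in Python; the features, outcomes, distance measure and the plain k-nearest-neighbours average below are illustrative assumptions, not the actual method used in the lab.

```python
import math

def distance(a, b):
    """Euclidean distance over the personal feature vectors."""
    return math.dist(a, b)

def personalized_prediction(person, dataset, k=3):
    """Predict from the k most similar records instead of a global model."""
    neighbours = sorted(dataset, key=lambda rec: distance(person, rec[0]))[:k]
    return sum(outcome for _, outcome in neighbours) / k

# (features, outcome) records, e.g. risk factors -> event risk
dataset = [
    ((55, 130), 0.2), ((63, 150), 0.7), ((58, 140), 0.4),
    ((70, 160), 0.9), ((50, 120), 0.1),
]
print(personalized_prediction((60, 145), dataset))  # average of the 3 nearest
```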
Personalized modeling devices and individual risk of event prediction are also a very active area, where we are seeing portable devices that can predict certain events for a person.
Here we have a prediction of stroke for an individual, one day to 11 days ahead; that is a personalized model that is built on both personal data and environmental data, including solar eruptions and pollution. That was done with the NISAN Institute at AUT, and this model predicts stroke one day ahead with 95 percent accuracy on the population of Auckland.
Understanding human decision making is the subject of so-called neuromarketing and neuroeconomics. This is current research that I do with my students, where we
measure brain data to understand how people react to familiar versus unfamiliar objects. And we can see that very deep learning is happening in the brain, with
different areas activated, when people see familiar objects, and that can predict familiarity for the person even before 300 milliseconds, which is considered to be the perception time of a stimulus. So here we have persons who perceive unfamiliar objects; you can see that not much is happening
in their brains, and that can be classified very early after the stimulus is presented. Applying AI in bioinformatics is a very active area, where we have systems that extract patterns from gene expression and
protein data, to define the pattern that can discriminate people with a good prognosis from people with a bad prognosis of cancer. And computational neurogenetic modelling is part of bioinformatics, where we have gene information
included in the computational brain models; that is also a very active area of research. AI for audio-visual information processing is indeed now having a boom with the deep learning machines, where we have systems for fast-moving object recognition used in autonomous vehicles, surveillance systems, cyber security and military applications, and we
have systems that can recognize very quickly the movement on the road and classify this movement with very good accuracy. Enhancing human prosthetics: human prosthetics sometimes give some information, but it is not precise
information, and we can enhance the information given to the person from the prosthetics, for example eye prosthetics, with some artificial intelligence that analyzes the information and gives, for example, verbal information to the person.
So prosthetics are, of course, a very active area, and robotics, of course, is used in many laboratories to demonstrate algorithms; here we have an autonomous drone system that also has AI for image recognition and for decision making.
Driver assistance: well, the IBM Watson conversation services offer one example of
driver assistance, and I know that many groups work on these autonomous drivers. I did this experiment, and I said to the system, well, stop at the gas station.
And the system gave me a map of the gas stations nearby and asked which one I would like to drive to. But I didn't want to drive anywhere; I said, stop the car, and the system said, I don't understand. So we have to be careful, of course, using these driver assistance systems, but a lot of progress is being made in this respect.
AI in finance: now we have automated trading systems, which are autonomous
robots on the internet, so there are quite a lot of autonomous trading agents. We don't have people sitting in a big room trading; no, these are autonomous robots that do the trading, and this is a very fast-growing area at the moment.
AI for ecological data modelling and event prediction: there is quite a lot to be done to take data from ecology and be able to predict events like the establishment of harmful species; that was done with Lincoln University.
Or to use multisensory streaming data to evaluate the pollution of an area; this is the area of Vancouver, and the research was done with the group of David Williams from the University of Auckland. Predictive modeling of streaming data is now becoming very useful in telecommunications, in milk volume prediction, in wind energy
prediction and you can't imagine how much the wind energy systems and farms lose with a bad prediction of future energy. So this is something that has a good place in New Zealand.
Seismic data modeling: this is New Zealand, with all the seismic sites, and you can see what is happening at the moment as seismic activity. This means that there is a seismic change at this particular seismic centre, and a connection means that after that there was seismic activity at another one, so there is a temporal relationship.
And there is good progress in this respect in New Zealand. AI in New Zealand: well, this research started early. Here we have John Andreae, who is still in Canterbury; he is retired, but he
published one of the first books, Thinking with the Teachable Machines, in Artidemic Press 77. Now we had development in computer interfaces, computer animation, neural networks, machine learning software and now we have a very active research in New Zealand in computer vision, natural language processing, evolutionary computation, robotics, emotional computing, distributed AI.
And I should say that this development of AI did not happen in a kind of empty space; it was built on the general computer and information sciences capabilities of New Zealand, thanks to some pioneers in this area like Brancourt, Bob Dorham here, Salis and others.
AI in New Zealand: at the moment we have quite a lot of applications across areas such as medical devices, healthcare, transportation and precision agriculture, and I have a list of the applications that I could find.
But it is not a long list, it is a short list; I think there are many other companies using artificial intelligence at the moment, even though artificial intelligence in law is only now being developed. There is an AI Forum in New Zealand to discuss the future of AI in New Zealand.
And Interact is just one project on artificial intelligence for big data technologies in New Zealand, which consists of generic technologies along with domain-specific technologies in different areas, together with projects and products that are planned to be developed.
The future of AI: well, that's a big question. Is it artificial general intelligence, meaning machines that can perform any intellectual task that humans can do? Is it the technological singularity, where machines become so super-intelligent that they
take over from humans and develop on their own, beyond which point human societies collapse in their present form, which may ultimately lead to the perishing of humanity? Or is it tremendous technological progress: early disease prognosis and diagnosis, robots improving productivity?
Well, Stephen Hawking said: I believe there is no real difference between what can be achieved by a biological brain and what can be achieved by a computer. AI will be able to redesign itself at an ever-increasing rate.
Humans, who are limited by slow biological evolution, couldn't compete and could be superseded by AI. AI could be either the best or the worst thing ever to happen to humanity. Well, do we accept that or not? It is of course a matter of discussion.
My view is that the future is in the symbiosis between human intelligence and artificial intelligence for the benefit of humanity, while being at the same time aware of the potential risk of devastating consequences if AI is misused.
And there will be another lecture, lecture number four, by Ian Watson, talking about the ethics of AI. Well, now we should talk about natural intelligence, and the questions are: would AI help to improve our human intelligence?
Will reading books improve our intelligence, our IQ, as Jim Flynn suggested? Will mindfulness help? Will brain prosthetics help?
Or do we need to listen more often to Mozart's music, because there is some study suggesting that Mozart's music is very similar to the brain's alpha waves and stimulates human creativity? Maybe we need to listen to Mozart's music, but I don't have much time, so I will stop it here.
So you can hear it at home; I'm sure you have some. And I should say that this work in artificial intelligence, and my work personally, has been supported by AUT.
I would also say that Marie Curie was one of the first women in science, and I was funded by this European Union funding. The University of Portland and the IT Professional Institute, I would like to acknowledge them too.
And I would like to acknowledge my lab, most of whom are here. And my love and thanks go to my family: my wife, who has been with me for a long time and who in this particular case helped me to reduce significantly the number of slides, and also my daughter in Scotland and Asia in Switzerland, who I believe are watching this live presentation.
Thank you very much.
Thank you, Nick, that was a great way to start off the series. We've got enough time for a few questions, so if you've got a question, stick your hand up. Thank you very much, sir, for impressing on us the enormous extent to which human abilities can be extended by machines.
I'm wondering, are there any overall limitations? For example, we have two sorts of knowledge in the world: objective knowledge, the way the external world we perceive works; this is science, I'd say.
And then we also have subjective knowledge, our own reactions: the self, touch, place, personal experience, time and so forth. And I think that Bertrand Russell held the view that all our knowledge, all objective knowledge, is dependent on our subjective experience.
There seems to be an enormous difficulty in going from our objective scientific knowledge to understanding our subjective knowledge. Do you have a response to that problem?
Well, the question is: there is objective knowledge and subjective knowledge in our brain; how does subjective knowledge match the objective knowledge, and is artificial intelligence helping in this respect?
Well, I should say that this is a philosophical question. I'm not a philosopher of AI, I'm more of a scientist. But my approximate answer would be that knowledge, what we talk about as knowledge, is only subjective:
it resides in the human brain. Our knowledge about objective nature, the so-called objective knowledge, changes all the time, evolves all the time, and it has to be the case. We are no longer in the time of Aristotle, when knowledge was fixed forever; that is what he claimed.
Now we talk about knowledge that is evolving, that we never stop improving and adapting, to make life better based on this knowledge. So there is always improvement and adaptation of our subjective knowledge of the objective environment, objective nature.
Well, that is my kind of modest response to this large philosophical question, which probably somebody can answer better. Which side?
Yes, of course, yes, that will be available online. You can also approach me on this email address here and I can send it to you as a personal copy if you like.
Would you like to respond, would you like to give a better response to that? Thank you for helping me elaborate the answer, thank you.
Of course, symbiosis. The question is: does symbiosis mean man and machine working together?
Yes, it does, and it is up to us to make it happen; that is how I see the future of AI, in symbiosis with our human intelligence, enhancing it, helping us with cognitive tasks that until now have been done by people, so that we have more time to develop our human intelligence, to develop new technologies and improve life.
Yes, definitely: symbiosis is a co-working system, the human and the AI.
Well, I wouldn't say at the same level; I think we, the humans, are the drivers of this symbiosis. That is my belief, and it is the opposite of what other people say about the technological singularity, that the robots will be driving it.
I think it's up to us to make it happen or to lose the game. Did you get the answer that you wanted?
Yes, the question is: how different is artificial intelligence from the process of making computers faster? Making a fast computer doesn't mean that it is artificial intelligence; that is the question.
Well, the answer is no: if we look at faster machines, they work fast at the level of searching and crunching data, rather than learning, generalization, making hypotheses, planning and prediction.
Of course you can say, well, the chess player, the machine that defeated Kasparov: it was not quite artificial intelligence, it was a fast machine which had some elements of artificial intelligence, but mainly it checked many, many steps ahead to see how the game would unfold.
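To illustrate the distinction, here is a minimal sketch of the kind of brute-force look-ahead the answer refers to: plain minimax search over a toy game tree. It is not Deep Blue's actual algorithm (which combined far deeper search with hand-tuned evaluation), but it shows why look-ahead is raw speed rather than learning: nothing in it generalizes or is remembered between games.

```python
# Plain minimax: exhaustively look ahead and assume both players play
# perfectly. A node is either a numeric leaf score (a position evaluated
# by some fixed scoring function) or a list of child nodes.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):
        return node  # leaf: the score of this position
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A made-up 3-ply game tree with leaf scores from the first player's
# point of view.
tree = [
    [[3, 5], [6, 9]],
    [[1, 2], [0, -1]],
]
print(minimax(tree, maximizing=True))  # prints 5: the best achievable score
```

The only way to make such a program stronger is to search deeper and faster; it forms no hypotheses and carries nothing over to the next game, which is exactly the contrast being drawn with learning systems.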
So, having fast machines can help artificial intelligence, but it doesn't constitute artificial intelligence. That's probably a good point to stop. Nick, we've got something here that isn't artificial intelligence, but it will change your mind state.
Well, if it excites my brain, that will be good; thank you, thank you very much. Thank you, thank you very much, and next week's lecture is Robert Hans Gusken, who is going to be talking on Home Smart Home. Come along, see you then.