
Heidelberg Lecture: On the Nature of Computing


Formal Metadata

Title
Heidelberg Lecture: On the Nature of Computing
Number of Parts
340
License
CC Attribution - NonCommercial - NoDerivatives 4.0 International:
You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Computing is a domain of knowledge. Knowledge is truthful information that, embedded into the right network of conceptual interrelations, can be used to understand a subject or solve a problem. According to this definition, Physics and Biology, but also Mathematics, Engineering, the Social Sciences and Cooking, are all domains of knowledge. This definition encompasses both scientific knowledge about physical phenomena and engineering knowledge applied to design and build artefacts. For all domains of knowledge, Mathematics and Logic provide the models and their underlying laws; they formalize a priori knowledge, which is independent of experience. Computing, with Physics and Biology, is a basic domain of knowledge. In contrast to the other basic domains, it is rooted in a priori knowledge and deals with the study of information processing: both what can be computed and how to compute it. To understand and master the world, domains of knowledge share two common methodological principles: they use abstraction hierarchies to cope with problems of scale, and they use modularity to cope with complexity. We point out similarities and differences in the application of these two methodological principles and discuss inherent limitations common to all domains. In particular, we attempt a comparison between natural systems and computing systems by addressing two issues: 1) linking physicality and computation; 2) linking natural and artificial intelligence. Computing and the Physical Sciences share a common objective: the study of dynamic systems. A big difference is that physical systems are inherently synchronous: they cannot be studied without reference to time and space, while models of computation ignore physical time and resources. Physical systems are driven by uniform laws, while computing systems are driven by laws enforced by their designers. Another important difference lies in the discrete nature of Computing, which limits its ability to fully capture physical phenomena.
Linking physicality and computation is at the core of the emerging cyber-physical systems discipline. We discuss limitations in bridging the gap between physical and discrete computational phenomena, stemming from the identified differences. Living organisms intimately combine intertwined physical and computational phenomena that have a deep impact on their development and evolution. They share common characteristics with computing systems, such as the use of memory and languages. Compared to conscious thinking, computers are much faster and more precise. This confers on computers the ability to successfully compete with humans in solving problems that involve the exploration of large spaces of solutions or the combination of predefined knowledge, e.g. AlphaGo's recent winning performance. We consider that intelligence is the ability to formalize human knowledge and to create knowledge by applying rules of reasoning. Under this definition, a first limitation stems from our apparent inability to formalize natural languages and create models for reasoning about the world. A second limitation comes from the fact that computers cannot discover induction hypotheses, as a consequence of Gödel's incompleteness theorems. We conclude by arguing that Computing drastically contributes to the development of knowledge through cross-fertilization with other domains, as well as through enhanced predictability and designability. In particular, it complements and enriches our understanding of the world with a constructive and computational view, different from the declarative and analytic one adopted by the physical sciences.
Transcript: English (auto-generated)
Good evening, ladies and gentlemen. It's a great pleasure and a privilege to give this Heidelberg lecture here in Lindau.
I'm going to talk about the nature of computing, and computing in relation to other scientific disciplines. So I hope that my talk will trigger questions. OK, so let's start by saying that computing is a very young discipline, because
the foundations were laid only in 1936 by a British mathematician, Alan Turing. Then computers have been progressively used in all application areas. And you know that today we have more than, I don't know, many billions of computers
deployed over the planet. Most of them are not accessible by humans: they provide services automatically, they control processes. And there is a vision around all that. This is what we call the ICT revolution, and there is a vision that is called the Internet of Things.
You probably heard about that. So the Internet of Things is a technological vision that will allow objects to be sensed and controlled remotely by using a unified network infrastructure to achieve direct integration of the physical world
with computer-based systems. And the purpose is to improve efficiency and predictability. You know that today humanity is facing problems about how to manage resources and how to improve quality of life because of the growth
of the population. So this dream is very important, and it also raises important challenges. And I think that when this vision comes true, our lives will be transformed completely: the way we live, we learn, we work.
So computing is something important in our everyday life. And yet I think that as a scientific discipline it currently lacks recognition. This is attested, for instance, by the fact that it is not very visible:
computing is considered a technology and not really a scientific discipline. So there is this lack of recognition. I would also like to say that, as a member of academies, I have had the opportunity to discuss with top scientists
from other disciplines. And I think they have a very poor idea about what computing is; in particular I have had such discussions with physicists. And I think that physicists are obsessed by this reductionist view of the world.
They try to understand the world, perhaps rightly, as a game between particles and the interactions between particles. And they try to explain everything in those terms. And I think that computing and information cannot be explained
by taking this approach. This is something I will explain in my talk. Here I have some quotes from famous scientists. I'm not giving their names, but you can guess who they are. And they show that they totally ignore what computing is about and what information is.
You can contemplate the last quote: somebody who believes that, by putting the contents of his brain on some memory stick, he will survive after his death. So the question is, what is computing? I should say that even among computer scientists, there is no agreement about the scope
and the perimeter of the discipline. So here I am giving you this definition by the ACM; there is no broad agreement about it. So what is information? What is knowledge? (Knowledge is an important concept in computing.) What is computation? Other important questions are:
how is computing related to the physical sciences? And also, of course, the very hot question of the relationship between artificial and natural intelligence. In my talk, I will give my personal opinion about these, and I will also try to explain what the open questions are.
And the purpose is rather to sensitize people and to create the curiosity that will allow you to understand further what is going on.
So this is an outline of my talk. I will try to explain what information is, then what computing is, and domains of knowledge. Then I will discuss the relationship between physicality and computation, and between artificial and natural intelligence. So first of all, what is information?
To explain what information is, is somehow as hard as to explain what energy is in physics. Information, for me, is a relationship between the syntax of a language and a semantic domain. The semantic domain is a set of concepts; it is in our mind.
On the other hand, you have symbols: a structure with which you can represent some of your concepts, your ideas. And you go back through the denotation: if you look at the symbols, then you understand their meaning. So, for example, the concept is the number four, and you may have different representations
of the number four in terms of symbols. But of course the concept four is unique. So information is something strange, because it is just a mathematical relation between a structure and our mind. And so it is somehow in the mind of the beholder.
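The denotation idea can be sketched in a few lines of Python; this is an illustrative aside (the Roman-numeral parser is my own assumption, not part of the lecture), showing three different symbol structures all denoting the one concept "four":

```python
# Hypothetical sketch: three symbol systems (decimal, binary, Roman)
# denoting the same concept, the number four.
ROMAN = {"I": 1, "V": 5, "X": 10}

def roman_to_int(s):
    """Denotation of a simple Roman numeral (additive/subtractive)."""
    total = 0
    for ch, nxt in zip(s, list(s[1:]) + [None]):
        v = ROMAN[ch]
        # a smaller symbol before a larger one is subtracted (e.g. IV)
        total += -v if nxt is not None and v < ROMAN[nxt] else v
    return total

# Three different structures, one concept:
representations = [int("4"), int("100", 2), roman_to_int("IV")]
assert representations == [4, 4, 4]
```

The symbols differ; the denotation maps each structure back to the same concept.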
Here, for instance, is a script in a non-deciphered language, and it is not information: nobody understands what it is about. This is a Greek script. These are Maxwell's laws; this is information for a physicist. And this is information for everybody.
So I would like to emphasize the fact that information is an entity that is different from matter and energy. It is non-physical in the sense that it needs media for its representation,
but it is not subject to physical space and time constraints. That is very important to understand, and it may be difficult to accept for people who have a strong background in the physical sciences. But if you do not accept this, then you cannot understand
what computers do, because all the theory of computation is time-ignorant. Just to give you an example: you have information in your brain, and if this information disappeared, your weight and your physical characteristics
would be the same, but you would not be the same person. So information is exactly this: a structure. Now, there is a bit of confusion about the concept of information, and there is another concept, used by physicists in particular,
that I call syntactic information. This is just a quantity, while in my definition information is a relation between a structure and a semantic domain. I'm not going to give details about that; you have probably heard about Shannon's theory.
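Shannon's theory makes this quantity precise. As an illustrative aside (not part of the lecture), the entropy of an observed symbol sequence, in bits per symbol, can be sketched as:

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy, in bits per symbol, of an observed sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A maximally disordered two-symbol sequence carries 1 bit per symbol;
# a constant sequence carries none.
assert entropy("ababab") == 1.0
assert entropy("aaaaaa") == 0.0
```

Note that this measures only the statistics of the symbol structure; it says nothing about what the symbols mean.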
So please don't confuse syntactic information with information. Syntactic information, as you probably know, is related to entropy. Entropy measures the disorder of a structure,
and syntactic information measures, in a sense, the order of the structure. I'm not going to give more details about that. An important concept when you deal with computers is the algorithm. So what is an algorithm? An algorithm is a function that can be computed by a computer. We have concepts about functions.
We have mathematics. So, for instance, we have a function that is the addition of two integers. How do computers work? It is very simple to understand. The sum of 5 and 7 is 12. You have representations of 5 and 7
in the computer's language, a binary representation. And what is an algorithm? An algorithm is a recipe for manipulating symbols that produces the result. When you have this result, you interpret it back and say: this is 12. So an algorithm is just a recipe. Computers manipulate symbols, but they do not really
understand the meaning of the symbols; they just execute the recipes. Now, to finish about information, I would like to say two things about the basic laws of computation. Because, in principle, I think that if we change our education
curricula, these are things that students should learn in high school or even in elementary school. There is an important result in mathematics known as Gödel's incompleteness theorem. Gödel was an Austrian mathematician
who, at the beginning of the 1930s, published a very important result which says that, roughly speaking, among all the possible functions on integers, there is only a very small subset that is computable.
So most functions are non-computable. This is a kind of limitation of logic, in fact; a kind of uncertainty principle for mathematics. And throughout my career, I have been dealing with non-computable functions.
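The counting argument behind this claim can be sketched with Cantor-style diagonalization, an illustrative aside (my example, not the speaker's): given any enumeration of integer functions, one can always construct a function the enumeration misses, so the countably many programs cannot cover all functions on integers.

```python
def diagonal(enumeration):
    """Given an enumeration i -> f_i of integer functions, return a
    function that differs from every f_i (namely, at input i)."""
    return lambda n: enumeration(n)(n) + 1

# A sample (hypothetical) enumeration: f_i(n) = i * n.
f = lambda i: (lambda n: i * n)
d = diagonal(f)

# d cannot be any f_i, since d(i) = f_i(i) + 1 differs at input i:
assert all(d(i) != f(i)(i) for i in range(1000))
```

Since programs are finite strings and hence countable, while functions on the integers are not, most such functions have no program.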
Non-computable functions that I had to simplify, in order to compute approximations of them. Most useful functions are, in fact, non-computable. Typically, we do not have an algorithm that can decide whether a program is correct, or whether it terminates. And this is an important limitation of computing,
at least as it has been defined by Turing. Another important class of results is about complexity. Complexity says that there is no free lunch: if you want to solve a problem, you will need some quantity of memory and some time. OK, so just remember these results,
because this will be useful in the sequel. Now, what is computing? There has been a lot of discussion about the nature of computing. Of course, some people said computing is a science. Now, if I take a strict definition of science, I know that these definitions of science may vary from country to country.
But the standard definition of science focuses on the discovery of facts: you study phenomena and try to discover what physicists would call laws. According to that definition, computing is not a science; it relies exclusively on mathematics.
And it is not a science; but then, engineering is not a science according to that definition either. So instead of disputing about definitions, or trying to enlarge the definition, I will take a more general perspective here and define what a domain of knowledge is.
So what is knowledge? Knowledge is information that is truthful and that can be used either to understand a subject or to solve a problem. I can give examples of that. The Pythagorean theorem, as a result of mathematics, can be used to solve problems.
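As a trivial illustration of knowledge used to solve a problem (my example, not the speaker's), the Pythagorean theorem turns coordinate differences into a distance:

```python
from math import hypot, sqrt

def distance(p, q):
    """Straight-line distance between two points, by Pythagoras:
    the squared hypotenuse is the sum of the squared legs."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return sqrt(dx * dx + dy * dy)

assert distance((0, 0), (3, 4)) == 5.0
assert distance((1, 1), (4, 5)) == hypot(3, 4)  # stdlib equivalent
```

The theorem itself is a priori knowledge; applying it to measured coordinates is how it solves a concrete problem.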
So this is knowledge. But also, cooking is a domain of knowledge, because recipes tell you how to build artifacts that are meals. Medicine is a domain of knowledge. OK, now I would like to explain
this concept of truthfulness, because it's very important to see the distinction between computer science and physics, for instance. Immanuel Kant made a distinction about the nature of knowledge. He talked about a priori knowledge and a posteriori knowledge. What is a priori knowledge?
A priori knowledge is knowledge independent of our experience. Typically mathematics and logic, but also the theory of computing, are a priori knowledge: a priori because it will be eternally true, provided, of course, that we accept the truthfulness of the axioms. A posteriori knowledge depends on our experience.
So physicists observe facts, find relations, find laws, and generalize these laws into physical laws. But physical laws, as you know, are falsifiable. There is a famous example: Newtonian physics,
which has been shown not to be exact enough by relativity theory. So I think it is important to understand this distinction, because computing sits on the side of mathematics, and that's all. Now, let me just summarize about knowledge.
Knowledge acquisition and development combine three things: science, engineering, and also mathematics, logic, and linguistics. And I hope you can understand the interaction; this slide probably explains
the relationship between them. So in this slide, I show the material world here, and this is heaven: information and knowledge. Knowledge is a particular type of information that, as I said, can be used to solve problems or to understand a situation. So what does science do?
Science does experiments on the material world and discovers laws. The laws are knowledge; but in doing so it uses, of course, mathematics and logic and all the existing results. And what does engineering do? It goes from heaven down to earth and brings all the artifacts of the human-built world,
the technical civilization we know. So there is a nice complementarity between science and engineering. OK, this is the global picture, and I hope you don't disagree with it. Now, what is computing? Computing is a domain of knowledge. It has a science and, in fact, it is
associated with many engineering disciplines. It is a science to the extent that it studies information processes, and information processes may be artificial or natural. So people now want to see what we call DNA translation as an information process,
or we talk about neural networks. Then, of course, we have the engineering facet, which is about designing and building computing systems: we start from requirements and we build computing systems. OK, now let me say a few words about the domains of knowledge.
What are the basic domains of knowledge, in my opinion? You have mathematics and logic, which provide all the models for the different scientific disciplines. Then you have the physical sciences, and I suppose chemistry is there with physics and other disciplines. This is about phenomena of transformation of matter and energy, clear? And then computing is about the transformation of information. And biology is sitting here, aside. I believe biological phenomena combine physical and chemical phenomena with information transformation phenomena in a very intricate manner, and we cannot separate the information transformation from the physical and chemical phenomena. So it's a discipline apart. OK, so all these disciplines
share some common methodological approaches. You know, reality has depth and breadth, and in order to cope with the complexity of reality, we do some layering: we define abstractions. To avoid misunderstanding, an abstraction is not an oversimplification of reality. It's a simplification that reveals the relevant features of the observed reality. And for this, we use abstraction hierarchies. Just to give you an example, here I am showing three hierarchies: one for the physical sciences, one for computing, and one for biology. Perhaps you may not agree with the decomposition I'm making, but usually we make such a decomposition to understand the universe we are studying.
And of course, the problem of each domain of knowledge is how to unify knowledge across its different layers. Physicists talk about a theory of everything, a unified theory. And probably for computing, we need something like that, but I'm going to explain why this raises some difficulties. Now, all the domains of knowledge, at each layer, try to simplify the problem to cope with complexity.
And one approach is what I call modularity: to assume that the systems we are examining are composed of atoms, of atomic elements, of bricks, of components, and that components are glued together to build composite systems. This is an idea that goes back to Democritus' atomic theory. But it is based on some assumptions, and I'm not sure that everybody is aware of them. First, any system can be considered as built of atomic components. Second, the behavior of each component can be studied separately. Third, for composite components, we can infer their behavior from the behavior of their elements. And one very important assumption, which in many cases is not respected, is that the behavior of the components is not altered when they are composed, or that the changes are predictable. Typically, in biological systems, but also in programming, the behavior of components is altered when we compose them. And in linguistic systems, you have the phenomena we call context sensitivity. A specific problem of computing systems is what we call component heterogeneity. So for modularity in computing, we don't have good theories. But I think that in chemistry, you have a nice theory of modularity.
Now, once you have a theory of modularity at each level of abstraction, you would like to unify knowledge. This is usually done by applying what we call, in logic, a compositionality principle. Is it possible to infer the properties at one layer from the properties of the components in the layer below? Typically: is it possible to infer the properties of water from the properties of hydrogen and oxygen atoms and the rules for their composition? Is it possible?
OK, I should say I have no technical opinion about that. I have discussed it with famous physicists, and it seems that it's fairly complex. But I do have a technical opinion about the fact that the properties of application software cannot be inferred from the properties of the hardware. Even if I let you examine how your hardware behaves, you cannot do it. It's not easy, because you have a change of scale by some huge factor, and it's extremely complicated. I'm not going to discuss this. And of course, another very interesting question is whether properties of mental processes
can be inferred from behavioral properties of components of the brain. This is also a very interesting problem. I think that these questions are of the same nature and will have no answer. And here, I would like to mention a very interesting paper, "More Is Different" by Philip Anderson, an old paper published in Science. He considers reductionism a fallacy: the ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. Think about that.
OK, now, just to finish about domains of knowledge, let me explain how computers allow us to push further the limits of knowledge. Say I am a meteorologist: I am observing a phenomenon, and I would like to make a prediction.
So the scientific approach consists, first of all, in building a model. You build a model of what you observe, and then you have to do what? To check that this model is a faithful model of the reality. So you do some experiments and check that the model does not disagree with the reality, so that whatever you infer about the model holds for the reality. Now, you may build models that are very, very complex, and you will use computers to solve them. You may come up with models that cannot be solved by all the computers of the world, or can only be solved approximately. And this is the way you generate knowledge. Without computers, we would have only our pencils to analyze equations, and this would have been too hard. Remember, the Americans built the first computer for the bomb, because they had to solve systems of differential equations. So, to summarize, predictability is limited by two things: what I call epistemic complexity, your ability
to find models that faithfully capture the observed reality, and computational complexity: how hard it is to solve the models you have invented. Now, in the opposite direction, if I am an engineer, I have needs that I can express in natural language, and I want to build a complex artifact. How do I proceed? My work is on the engineering side, so this is a type of problem I know very well. First of all, I'm facing linguistic complexity. Typically, for instance, in a large computing project, you have specifications that are books; you have to read the books and build models, or write software. And then you have to synthesize the artifact. So now you have the problem of coping with linguistic complexity and computational complexity.
Okay, so this slide summarizes the two directions, and there are many other things to say, but unfortunately I don't have time. Now let me say something about linking physicality and computation, about how computers can be related to physical phenomena. There are important differences, as I said, because physical phenomena cannot be understood without the concepts of space and time, while computation theory is independent of time and space. There are in fact two approaches for bridging the gap between the two worlds. One is to assume that the physical world is a computer, what some people call digital physics. The other approach is to extend computing to encompass natural phenomena. So the first approach is digital physics.
According to digital physics, you say: well, the world is discrete, and it's a huge computer, and all phenomena are the results of the computation of this computer. Okay, it's a nice metaphor. I don't think it is a realistic view; I'm not going to give arguments about that. The idea I like very much, and which I think is very, very promising, is the other one: to consider natural computers. So what is the idea behind natural computing? Each well-understood physical phenomenon involves a computation, described by the underlying physical law. If I throw, say, a stone, it will describe a parabola. So I can consider that the stone is, in fact, a computer that computes a parabola. Or here, an electron moving in a uniform electric field also computes a parabola. So you use the ability of nature to solve problems, and we talk about quantum computing, bio-inspired computing, analog computing. Here the approach is very, very different.
And okay, I believe this is a very, very promising approach. But in order to bridge the gap between the two, we need to extend Turing machines in some manner. So let me try to explain where the basic differences between physical science and computing lie. Assume that both deal, and this is true, with dynamical systems of the form X' = f(X, Y). Here X' is, conceptually, the next state, X is a state variable (or a set of state variables), and the Y's are inputs. If I am a physicist, or probably a chemist, I will consider that X' is the derivative of X with respect to time, X is the current state, Y is the current input, and X and Y are functions of time. Now, in computing, X' is the next state of the program, and X is the current state.
But X' and X are discrete variables, defined over discrete values. And that's a big difference. For instance, if I consider a system as simple as a mass-spring system, you have a differential equation that describes the dynamics, and you have a law: the law of conservation of energy. And then consider a very simple program. This program implements Euclid's algorithm: it computes the GCD of two integers, X and Y, from the initial state X0 and Y0. But here you see the difference: on one side you have an equation, which is declarative; on the other you have a procedure, where you tell the machine how to proceed. And there is some similarity: programs also obey laws, which we call invariants when we are programming. Now, to summarize: physical system models are declarative.
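This contrast can be sketched in code (a minimal Python illustration of my own, not from the lecture slides): the mass-spring law is a relation that holds at every time instant, which a computer can only approximate by discretizing time, while the GCD program prescribes steps, with gcd(x, y) = gcd(x0, y0) as its invariant, its "law".

```python
import math

# Declarative side: the mass-spring law x'' = -(k/m) x holds at every
# time instant; a computer can only approximate it by stepping time.
def simulate_spring(x0, v0, k=1.0, m=1.0, dt=1e-3, steps=10000):
    x, v = x0, v0
    for _ in range(steps):
        a = -(k / m) * x          # Hooke's law
        v += a * dt               # explicit Euler step (approximate!)
        x += v * dt
    return x, v

# Procedural side: Euclid's algorithm. The "law" of this program is its
# invariant: gcd(x, y) == gcd(x0, y0) holds at every step.
def euclid_gcd(x0, y0):
    x, y = x0, y0
    while y != 0:
        assert math.gcd(x, y) == math.gcd(x0, y0)  # the invariant
        x, y = y, x % y
    return x

print(euclid_gcd(48, 18))  # -> 6
```

Note that the Euler simulation only approximately conserves energy: the discretization of time already loses part of the physical law, which is the point of the comparison.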
When you write the relations, they hold for any value of time. And of course, the world is inherently synchronous: there is some correlation between the speeds of all the events that happen in the world. And the world is driven by uniform laws. Programs, by contrast, are procedural, ignore physical time, and are driven by specific laws. Each time you write a program, you define the underlying laws, so the designer creates a world, and that's very interesting. Now, what are the main limitations
of computing as it is today? Computing cannot simulate physical processes like this one, an infinite sequence of converging discrete events. If I take a ball and let it bounce on the floor, and I assume that at each shock it loses some percentage of its speed, then the ball will eventually stop bouncing. This is a very trivial fact, but this process cannot be simulated by computers. Why can it not be simulated? Because computers are discrete, and they cannot compute infinitesimal quantities. So when engineers simulate such phenomena, and you have plenty of situations like that, they have to guess what the limit of this oscillation will be, okay? And to guess the limit, you need induction. Okay, now it becomes technical, but believe me, computers cannot discover induction hypotheses; if they could, then Gödel's incompleteness theorem would not hold. So this is a limitation when we use computers to simulate physical phenomena like this. I believe that natural computing may be a very interesting direction for modeling such phenomena.
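To make the bouncing-ball example concrete, here is a small Python sketch of my own (not from the lecture): an event-driven simulation must be cut off after finitely many bounces and only approaches the stopping time from below, while a human using induction sums the geometric series of flight times and obtains the exact finite limit.

```python
import math

G = 9.81   # gravity (m/s^2)
R = 0.5    # restitution: fraction of speed kept at each bounce (assumed)

def simulated_time(h0, max_bounces):
    """Event-driven simulation: a computer must stop after finitely many bounces."""
    v = math.sqrt(2 * G * h0)      # impact speed of the first fall
    t = math.sqrt(2 * h0 / G)      # duration of the first fall
    for _ in range(max_bounces):   # the cutoff the machine cannot avoid
        v *= R                     # speed after the bounce
        t += 2 * v / G             # up-and-down flight time until next impact
    return t

def exact_stopping_time(h0):
    """What induction gives a human: the sum of the geometric series."""
    v = math.sqrt(2 * G * h0)
    return math.sqrt(2 * h0 / G) + (2 * v / G) * R / (1 - R)

# The simulation only ever approaches the exact limit from below.
print(simulated_time(1.0, 100), exact_stopping_time(1.0))
```

The induction step, recognizing that the infinitely many remaining bounces sum to a finite quantity, is exactly what the discrete event loop cannot discover by itself.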
Now, let me talk about something else that is also related to linking physicality and computation. We are in Germany, and Germans talk about the fourth industrial revolution, whose heart is the kind of systems we call cyber-physical systems. So what's the idea behind cyber-physical systems? The idea is to extend 3D printing. We understand what 3D printing is about: you build a model of an object and send this model to a printer, and the printer will produce the object. So you may design the object here in your office and send it to a printer, maybe in China, wherever. Okay, now we want to extend this idea and build whole systems. You see here a motorcycle that has been designed and 3D printed, okay? Even the electronic components it has, okay?
So the idea is to use cyber-physical components: components of which we know not only the electrical and physical characteristics, but also the computing characteristics. This is a great idea, and it is, as I said, at the heart of the fourth industrial revolution. Typically, you will have tools on the cloud to build the cyber-physical prototype of a car. You can run this prototype and design components like this one; you can test them on the prototype and send the mathematical models of the components to a factory somewhere else, where they will be produced remotely. So in the future, it will be possible for you to design your own car. You go to the website of Daimler, for instance, and you have libraries of components. You pick components: you choose the engine, the wheels, whatever you want, and you combine them. You have a graphical interface to group them together, to compose them, and you build a virtual prototype of your Mercedes. You run the virtual prototype (here this is a Tesla, not a Mercedes, okay), you see its performance, and if you are happy with it, you send it to Daimler, to the factory. The prototype is interpreted by robots, or a swarm of robots, and they produce your car. You send a check along with it,
and you get back your car, the car you have designed. So this is also a great revolution in creativity. Okay, now, achieving this poses a challenge that is, in fact, bridging the gap between physics and computing, and there is a very interesting project funded by DARPA (DARPA funds military research in the US). The aim of the project, which is finished now, was to build this, an amphibious tank, from libraries of components, where each component is a piece of software,
and you have tools to combine components and build the virtual prototype of your tank; you run the tank, you test that it is okay, and you send it to a foundry. That's a great vision, but of course, doing this raises very, very interesting problems about how to integrate theories: how to compose mechanical features, electrical features, fluidic, thermal, et cetera, all together, into a huge system of differential equations that you have to solve in order to simulate the behavior of this artifact. Okay, now it's time to talk about the relationship between artificial and natural intelligence. Okay, so you've probably
seen what happens in the media, what people are saying about artificial intelligence. Elon Musk, and you probably know who Elon Musk is, a prominent engineer and millionaire, the owner of Tesla among others, and also Stephen Hawking, are saying that it's a threat to humanity, okay? So you have a big debate about that. Okay, let me give you my modest opinion about artificial intelligence, at least
given the state of affairs today, okay? I think that computers can surpass conscious human thinking in that they compute much faster and with much higher precision. This is the great advantage of computers over humans. And having these qualities, they have the ability to successfully compete with humans in solving problems like these, for instance.
They can defeat Kasparov in a chess game; you have probably seen IBM's Watson playing the Jeopardy! game in the US, and more recently you have the achievement of Google DeepMind with AlphaGo, okay?
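The reliance of such game-playing systems on fast exploration of huge solution spaces can be illustrated with classic minimax search. This is a toy Python sketch of my own on a tiny take-away game, nothing like the real engines, which add learned evaluation functions and enormous engineering:

```python
from functools import lru_cache

# Toy game: a pile of stones; each player removes 1, 2, or 3 stones;
# whoever takes the last stone wins. Minimax exhaustively explores the
# game tree: brute-force search, with no "understanding" of the game.
@lru_cache(maxsize=None)
def current_player_wins(stones):
    if stones == 0:
        return False          # no move left: the player to move has lost
    # Win if some legal move puts the opponent in a losing position.
    return any(not current_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Known theory for this game: the player to move loses exactly when the
# pile size is a multiple of 4. The search rediscovers this by brute force.
print([n for n in range(1, 13) if not current_player_wins(n)])  # [4, 8, 12]
```

The machine "plays perfectly" here purely by enumerating positions, which is the point: speed of search, not comprehension.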
So they are very good at solving these types of problems, and I would like to say that this ability relies exclusively on the fact that they can perform computation very fast, fast enough to explore huge spaces of solutions. But they don't understand anything; this is just supercomputing and pattern matching, to use another technical term. So I think that, in order to compare human and artificial intelligence,
we should agree on a definition of what intelligence is. There is a test proposed by Alan Turing, who wrote a paper at the beginning of the 1950s saying how we can compare the abilities of a computer with those of a human by doing an experiment. It's a behavioral experiment. You have a computer here, A, and a person, B, and you have an experimenter, C, who interacts with them online; there is a wall between them, and the experimenter is asked to see whether he can distinguish between the behavior of the human and that of the computer. He does not know which of A and B is the computer and which is the human. This test has been criticized by many people
for many reasons, and I also don't believe it is a good criterion. John Searle, a philosopher and professor at Berkeley, proposed the Chinese Room Argument, a thought experiment showing that computers can manipulate symbols without understanding what they are doing. Typically, you have a computer that has the list of all possible questions in Chinese and the corresponding answers. It receives questions in Chinese, and by pattern matching it finds which sequence of symbols matches in its repository and retrieves the corresponding answer. So the computer does not understand what it is doing; it is just manipulating symbols and words. I also think that this is not a good test, and in reality we don't have a good test
to decide whether computers are as intelligent as a person, at least behaviorally. So my approach is the following. I have at least one criterion: I consider that humans exhibit general intelligence, and computers are not good at that. Let me explain what I mean by general intelligence. Just to give you an example: a computer that is good at playing Go is only good at playing Go; it solves one problem. The question is how to be able to solve many different problems, and this is what our mind does, based on the fact that we have a semantic model of the external world.
Okay, I would need time to explain this fully, but what is consciousness? How does our mind work? Our mind has the ability to see ourselves acting in this semantic model. We know the state of affairs, and we have choices: choice A, B, C. For each choice, we contemplate ourselves taking it, and we estimate what its consequences will be. So we have a model of the external world,
and this model, of course, is updated through experience and learning. It's very hard to equip computers with such a model, because in order to do that, we would have to analyze natural language, and unfortunately, we don't know how to do that. Of course, you have the language translators proposed by Google and others, but this does not mean that the translators have a model of the external world. Analyzing language and building a semantic model of natural language is a very hard problem. We don't know how to solve it.
So computers today cannot apply common sense reasoning. This is the cover of the Communications of the ACM, a special issue on common sense reasoning and common sense knowledge. You see that you may have a very clever robot, but look at what it is doing here: something completely silly. And just to give you another example, this is an example I have devised myself. If I show you this sequence of pictures, you will say immediately: oh my God, this is an aircraft crash. While if you give it to a computer, it will find that there is a bus; it can analyze the picture, but it is very hard for a computer to infer that there will be a crash, because the computer would need the common knowledge we have in our minds. This is not a matter of deeply understanding physics. Through education and experience, since our birth,
we have this model in our mind. Now, I believe that computing, as it is today, can approach some functions of our mind, namely what Daniel Kahneman calls slow, conscious thinking. Daniel Kahneman is an economist from Israel; he wrote a famous book, Thinking, Fast and Slow, in which he says that in our mind we have a combination of two machines, two computers: one slow and conscious, and one fast and automated, or non-conscious. Most of our intelligence comes from fast, automated thinking. Just to give you an example: if I ask you what you are going to do when you leave here, you'll say, I'll get out, do this, take my car. This is procedural, slow thinking.
But when I am talking, or when I am playing the piano, the fast computer works. If a pianist thinks about how to move his fingers while he plays a piece of music, he will do everything wrong, okay? And when you are walking, too, your mind solves a very hard problem, and this is completely automatic. So my opinion is that computers, as they are today,
are suited for modeling slow, deliberate, analytical, and consciously effortful human reasoning. When Aristotle discovered the laws of reasoning, he discovered them by analyzing the way we reason. Computers, as they are today, are based on logic, and that is about procedural thinking. Now, natural computing seems more adequate for studying fast thinking; this is my take, this is what I believe. But unfortunately, fast thinking is non-conscious, and it is impossible to understand and analyze its underlying mechanisms and laws. Okay, I would like to say a few words about the limits of our understanding. It's very important to understand where we are limited,
especially when we are scientists and engineers. What does it mean to understand a situation? It means that we can connect a perceived relation between objects to our mental representation: we have a model in our mind, and the model matches the perceived relation. In some cases, a complex system is hard to understand not because it is not subject to laws, but because its complexity exceeds our cognitive capabilities. How limited are we as humans? There are some interesting experiments about that. This is determined by what we call the cognitive complexity of a model, and psychologists say that the limit on the size of the relations the human mind can deal with is of the order of five: say, five parameters, to simplify. And to break complexity, humans use different approaches: abstraction, modularity, and segmentation. So there are limits to our understanding. Now, an interesting question is how computers help us push the limits
of our understanding further. In most theories we develop, you have only a few parameters: in any scientific theory, I mean any formal theory, you have five or six parameters; you are limited. And this is because of this problem of cognitive complexity. Now, with computers, we can go further. We can build empirical models that combine theoretical models and ad hoc models, and do experiments, and this is very helpful. And of course, you've heard about big data analytics, an important trend now in computing. The idea behind it is very simple. You collect huge amounts of data and apply statistical analysis techniques and machine learning, and the purpose is to find correlations between different parameters. Of course, the parameters should be carefully chosen. This allows you to find market trends, disease propagation trends, or whatever. And this allows you, in fact, to have prediction without understanding, because of course, computers do not understand.
In doing this, you find correlations, and you understand that correlation is not causation. In some cases, you may suppose that one thing is the cause of another, and this may not be true. There is a long discussion in the computing community about that, and you have people who strongly defend this idea of big data analytics and believe that we can go very, very far in prediction. But it should be clear that this will be prediction without understanding. I can tell you a story about that. I have a friend who is a seismologist. Two years ago, he approached me and told me: Joseph, I have a problem, because Google predicts earthquakes better than I can. And this, I believe, may be true. Why? Because Google has many, many points of observation over the planet, and they can find correlations, and in some cases, they may be more successful.
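As a small illustration of why a correlation found in data is not causation, consider two quantities that both merely trend upward over time. This is a hedged Python sketch of my own, with made-up numbers:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two causally unrelated quantities that both grow year after year
# (illustrative made-up numbers, not real data):
ice_cream_sales = [10, 12, 15, 18, 22, 26, 31]
web_searches    = [100, 130, 155, 180, 230, 260, 320]

r = pearson(ice_cream_sales, web_searches)
print(round(r, 3))  # close to 1.0, yet neither causes the other
```

A statistical model fed these series would "predict" one from the other quite well, without any understanding of the common driver (here, simply the passage of time).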
Okay, so in many cases this is an approach that can be useful. And there are people who believe that we should develop a kind of web science: the web is a universe, so, like physicists, we could do experiments on it and discover laws. It's a movement in computing. I don't believe that we will get something as interesting from that, because the world as it is is very rich, the result of millions of years of evolution. Now, just to finish my presentation, I would like to say a few words about the discussion you have probably heard
about singularity. What does singularity mean? Singularity, or technological singularity, is the idea that at some point in this evolution, computers will become more intelligent than humans. This is based on arguments that are, I mean, not sound: they make some computation and say that around 2045, computers will become more intelligent, and this is a great danger. I don't believe this can be taken seriously by scientists, but it is sad that these purely speculative ideas are taken seriously by the media and propagated. That's really a pity. Now, there are real dangers, and I would like to mention two. One is that artificial intelligence and automation
will have a deep impact on wealth inequality. We will have unemployment, this is for sure; there are very interesting reports about that. The second is that people are worrying about machines that are too smart, and this distracts from the real danger: the present threat from machines comes from the fact that they are stupid, as I like to tell my students. Machines are stupid. They will do what they are instructed to do, okay? You have probably heard about Asimov's laws of robotics; a good robot should obey these laws: a robot may not injure a human being; it must obey the orders given to it; and finally, it must protect its own existence. This is a good robot. Bad robots may violate one of these laws. Now, I think that Asimov's laws may be violated by existing systems that are complex and mindless, and there is a danger today, okay? If the system that controls a nuclear plant is not well designed, we can have problems,
and just yesterday you heard that a huge cyber attack invaded plenty of systems and caused a lot of harm. So I think the problem does not come from machines that are too smart; it comes from machines that are not safe or secure enough, and we have proof of that, and also from the fact that these machines are controlled by a small minority. Okay, now I can close my presentation here. What I have tried to explain is that computing is a distinct domain of knowledge, and of course, it is very important that
we bridge the gap between computing and physical sciences, and this raises very, very interesting problems, and I think also this will be very useful if we want to build, for instance, cyber physical systems and meet the IoT challenge.
Also, computing will have a very deep impact on the development of knowledge, and this discovery is as important as the discovery of mechanical tools. Remember, when man discovered the wheel, or fire, or tools, he multiplied his muscular force; now computers allow the multiplication of our mental faculties, and this is very important. But we should not forget that, just as an aircraft is not a bird, a computer is not a mind, okay? So the problem is how to make computers more intelligent
and to exploit this nice complementarity between computers and humans. Also, computing has revealed the importance of design. Design is becoming very important, as important as the development of science, and it is very important that we study design as the process that goes from needs, from requirements, to implementation. Computing has also revealed the importance of knowledge and its cross-fertilization in achieving enhanced predictability, and this is a fact today. Finally, I would like to emphasize that computing enriches the way we understand the world. In the physical sciences, we understand the world by laws, and laws are declarative: you say nature behaves like that, you give differential equations. Computing, by contrast, is constructive and imperative. So you have two different ways of looking at reality,
and this is very interesting, I believe. So this is the end of my talk. Thank you very much for your attention, and if you have questions, I am available now, but also tomorrow if there are people interested in discussing these ideas. Thank you very much.