
Unveiling the Universe with Python


Formal Metadata

Title
Unveiling the Universe with Python
Title of Series
EuroPython 2016
Part Number
87
Number of Parts
169
Author
Valeria Pettorino
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared, also in adapted form, only under the conditions of this license.
Language
English

Content Metadata

Abstract
Valeria Pettorino - Unveiling the Universe with Python

I will describe a scientific application of Python in the field of astrophysics and cosmology: how the publicly available package Monte Python is used to compare data from space satellite missions with theoretical models that attempt to describe the evolution and content of the Universe. The result is surprising, as it points towards a Universe which is mainly dark.

Python is widely used in cosmology, the study of the Universe and all forms of energy in it. A large amount of data has recently been obtained through space satellite missions such as Planck, financed by ESA/NASA. Planck has observed the radiation emitted about 13 billion years ago (the Cosmic Microwave Background, CMB), which gives us information on the content and space-time geometry of the Universe. Many competing theoretical models have been proposed that aim to describe the evolution of the species contained in the Universe; therefore, cosmologists need a method to identify which theoretical model better fits the data. To compare data with theoretical predictions, cosmologists use Bayesian statistics and Monte Carlo simulations. Among the tools developed for the analysis, the package 'Monte Python' is publicly available and uses Python to perform Monte Carlo simulations: this allows cosmologists to determine the theoretical model that maximizes the likelihood of obtaining the observed data. Such a model is now the standard cosmological model, and it reveals a Universe that is very different from what scientists had expected: a Universe in which the atoms we are made of constitute only 5% of the total energy budget. The rest is the so-called 'Dark Universe'. I will illustrate the story of how cosmologists used Python to analyse the data of the CMB and unveil the Dark Universe.
Transcript: English (auto-generated)
Please welcome Valeria Pettorino, who will tell us about exploring the universe with Python. Good morning, everyone. Thanks a lot for being here. I would like to first thank the organisers, and those following live on
YouTube, for the opportunity to talk here. I will discuss how we can use Python in cosmology to unveil the universe and, in particular, the dark universe. I'm Valeria Pettorino,
and I'm a physicist, so just let me briefly introduce myself. I work in astrophysics and cosmology, the study of the origin, the evolution, and the content of the universe. I work in particular for two
space missions. In fact, the lights went fantastically down before. Two space missions financed by ESA, the European Space Agency, and NASA. The first one is the Planck satellite, which was launched in 2009, and we
released data last year for cosmology; the other one is this beautiful one, the Euclid space mission, which will be launched in 2020. I will tell you more about that afterwards. I've also been working a lot on
communication: I've been in charge of internal communication for the Euclid space mission and of public outreach for two years. I'm also very much interested in data science. I've been working separately on a
healthcare IoT project for a startup in London for some time, but that's a different story. I mention it because I've recently also become an ambassador for the S2DS program, so before we go to cosmology, let me also tell you about this program.
It's Science to Data Science, the largest data science boot camp in Europe. It's a five-week program that happens twice per year, once virtually and once in person in London, and it really aims at connecting the academic community with data science.
The ambassador program in particular aims to build a network between scientists from academia and the data science community outside it, and they support talks, also partly covering expenses if one wants to organise an event. I'll actually be moving to Paris in a few months,
and I'll probably be organising a data science workshop next year, so if you're interested in taking part or being part of this community, please contact me or just look on the web for the ambassador program. Okay, so let's look at cosmology now, and let's first
understand a bit which distances, which scales, we are talking about. Well, human beings are more or less at this scale, and if you go down to smaller
scales, then you reach the fields of interest of chemistry, atomic physics, and nuclear physics, down to 10 to the minus 15 metres, and even smaller scales, the scales of particle physics where the Large Hadron Collider at CERN is working, or the
very small distances probed by the gravitational waves that we heard about yesterday. But now I would like to bring you up in the other direction, to very large distances beyond human beings, beyond Earth, beyond the sun, in the
domain of astrophysics and cosmology. So we start our journey across the cosmos from our planet, Earth, which is one of the planets in our solar system, this blue dot right there, and the whole solar
system is here, on the edge of a spiral arm of our galaxy, the Milky Way. So here we are at about 10 to the 21 metres. But we want to go, and we can
actually go, we have the power to look much farther than that. In fact, this is a picture, I don't know if it's too bright to see anything, but that's a photo taken from the Hubble Space Telescope,
financed by NASA, in which every single point in this picture is a galaxy like our Milky Way. And we can go even, even farther, and for that I'd really like the light, if possible, to be even lower,
if there is anyone there before I start, because otherwise you won't see anything of the next slide. And also, I mean, it's a dark universe, so somehow,
okay, that's already great, that's already much better, thanks. So that's, again, a picture of all the galaxies around us, but we can go even farther, so if you imagine that you are somewhere in the centre,
let's say, of this video, and you go far away from these galaxies, then these are all galaxies which have been observed by the Sloan Digital Sky Survey, a collaboration that observed about one million galaxies, and they're just placed around in space
as they've observed them, and as you go farther and farther from our galaxy, you see that they don't really fill out the whole space, but they actually form a web, they form voids, places where there's no galaxies, and filaments, places where there's lots and lots and lots of galaxies,
and this has all really been observed by the Sloan Digital Sky Survey, so this is what is called the cosmic web. In addition, we have known for a long time that the universe is expanding,
in the sense that the distance between galaxies, the space itself in between galaxies, is stretching. For a long time, the expansion decelerated, it just went slower and slower due to gravity,
which tries to pull things together and slow down this expansion, which is also what you see here, this slowing down. And then suddenly, about five billion years ago, the expansion started to accelerate,
faster and faster, and this was discovered only in 1998. It was a huge, huge surprise that earned these three people the Nobel Prize in Physics in 2011 for the discovery
of the acceleration of the universe, and right now, the universe is accelerating, is in this phase of accelerated expansion. Now since then, since 1998, there have been several experiments, and so a lot of data
coming from different experiments on the ground and in space, different collaborations looking at different things, and they all seem to point towards the same surprising picture of the universe: a universe which is mainly dark. Atoms, ordinary matter,
all human beings, and basically all the stars account for at most five percent of the total energy budget of the universe. The rest is basically unknown,
and we know that about 25 percent of it is in the form of dark matter, a form of matter that still feels gravity and acts like the glue that forms galaxies and keeps them together. Even more mysteriously, about 70 percent of our
universe's energy budget is in the form of dark energy. That's dark in the sense that it does not emit light; we haven't actually detected the particle of dark energy, or of dark matter, yet, but we know
that it's responsible for this accelerated expansion of the universe. And so not understanding 95 percent of the universe is, as you can imagine, almost embarrassing, so it's the major challenge at the moment
and for the next generation of experiments. And this is a cosmic vision of really having the big picture, understanding again 95 percent of the energy that surrounds us, but it's also a big data challenge that joins a lot of different
communities together. So there is already a new generation of experiments, among which the next one to be launched again is the Euclid space mission, which are going to use different probes to scan
the sky, slice it, at different epochs in time. So they're going to observe, for example, the shapes of billions of galaxies at different epochs in time. And this is a huge challenge, it's a challenge
from the technological point of view, because you have to, of course, predict the technology and build a new technology to have, let's say, the resolution that allows you to discriminate among all the possible theoretical models that can explain dark energy,
to actually build a detector, to actually transfer the signal and compress it. So the whole signal processing challenge to understand, to reconstruct the shape of the galaxies and to compress the data
that comes from space, and to actually interpret it in terms of comparison with theoretical models, to finally all together test gravity and fundamental physics at very large scales, like we do at very
small scales, so testing forces, testing interactions at very large scales, like people do at the LHC at CERN, for example. And I would like to stress that this is not the work of a single astronomer, a single person that looks, I don't know,
writes down strange equations on the blackboard or looks at the telescope from somewhere. This is really an enterprise, in a way. This is work that involves huge, large collaborations.
So I'll tell you something about the two that I'm in. The first one is Planck, and this is a collaboration of about 100 scientific institutes in Europe, in the US, and in Canada. It involves about 500 people,
and for that I've been leading the analysis that compares the data from the satellite to theoretical models that predict dark energy and theories beyond general relativity, so modified gravity.
The other mission, Euclid, is more than twice as big, so at the moment it includes 1,300 people from 120 labs, 13 European countries, plus US, NASA, and Berkeley labs. So for that, apart from working on the communication,
I'm in charge of the whole forecasting activity to determine a reliable pipeline that can tell us how well Euclid will perform in discriminating among different theories.
Okay, so let me tell you a bit, let's say, more in detail what we actually observed and how we actually analyzed the data, and of course where we used Python in it. And I'll do that in particular for Planck. Planck was launched in 2009,
and we collected terabytes of data. It was sent 1.5 million kilometres away from Earth, orbiting around the second Lagrangian point, on the opposite side of the Earth with respect to the sun. And it scanned the entire sky twice per year.
So the spacecraft spins with one rotation per minute, and it traces circles in the sky, observing the radiation in all directions at different frequencies. So it contains two instruments,
one at low frequencies and one at high frequencies. And the high frequency part had to have a very complex cryogenic system that had to cool down the whole detector down to 0.1 Kelvin. So it was literally the coolest place in the universe for a while.
It observed in all directions, as you see, all the radiation. And what you see here is the emission also from our galactic plane along this line,
which actually, for us, is a background. We remove it. We don't want to see the light from our own galaxy. What we want to see is something much more challenging, and which we actually saw: the light of the cosmic microwave background
that was emitted 13 billion years ago. And this is a map of it. This is one of the main results, one of the main outputs of the Planck collaboration, in which you see this microwave radiation, and the different colors correspond to different temperatures,
so tiny, tiny differences in temperature in this radiation. The main temperature is about 3 Kelvin, so it's very, very cold. That's why the detector had to be so cold, even colder than that. But what we are actually interested in
are these tiny, tiny differences in temperature when we look in different directions. All these are like hot and cold spots as you look in different directions. And we have such an amazing resolution
in this map that we can understand how this light travelled down to us, and from there understand the evolution of the universe and reconstruct its content. It's really sort of similar to a map of the temperature on Earth, shown on the top,
where you would go up to, I don't know, 40 degrees or even higher in Bilbao recently, but just on the sky. So this is around 3 Kelvin, so minus 270 degrees Celsius.
And really you see tiny, tiny differences, of one part in 10 to the fifth, in something that was emitted 13 billion years ago. And that gives you a handle on the parameters that describe your universe, on the amount of dark energy, dark matter, and the expansion of the universe,
with a resolution at the percent level. So I think that's just almost astonishing. And most of the analysis is actually in the whole processing of the data, in trying to get rid of all the other sources,
all the individual point sources for which we have catalogs that we just remove. We remove the radio emission from the Milky Way. This Milky Way is really annoying, I'd say, for us. We remove all the dust emission again
from our lovely Milky Way, which is in itself, of course, of interest for other communities that study it in particular. All this in order to unveil the cosmic microwave background. And this was a result you might have seen: this map
was kind of advertised on the front page of basically all newspapers. What we actually get from the satellite, of course, is not really the map; it looks as if something terrible happened.
So what we get is just time-ordered data; at the beginning, that's an example of three minutes of raw data that we get from the satellite. Most of the analysis is really in processing this data. And for that we use several clusters
all over the world, basically. The main data processing centres are in Italy and France, both for Planck and for Euclid, and they collect, basically, terabytes of data. And for the next generation of experiments,
we really expect, also from radio telescopes, about terabytes of data per minute to arrive. So all this information that comes from the satellite arrives at the Mission Operations Centre in Germany, and then it's transferred to Italy and France, where the data processing centres are.
And then it's transferred to the whole community, basically, again, around the world in different institutes. And there's different groups that extract from those data, clean up all the data, and extract these maps. There are challenges between different groups
to understand which one performs better. We then project the maps onto spherical harmonics, to identify, say, the dependence on the different angular scales we are looking at. And for all this process there are actually,
as was mentioned in the talk before, lots of different codes by different people, written in different languages. For the extraction of the maps, for example, lots of them are actually in IDL and use HEALPix, which is unfortunate in the sense that IDL is not even open source.
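To give a flavour of what this step looks like in Python: healpy, the Python wrapper around HEALPix, can expand a full-sky temperature map into spherical harmonics and return the angular power spectrum. This is only a minimal sketch, not the Planck pipeline, and the map file name is made up for illustration.

    import healpy as hp
    import numpy as np

    # Hypothetical HEALPix map of CMB temperature anisotropies (file name is illustrative).
    cmb_map = hp.read_map("cmb_map.fits")

    # Expand the map in spherical harmonics and return the angular power spectrum C_ell.
    cl = hp.anafast(cmb_map, lmax=2500)
    ell = np.arange(len(cl))

    # The quantity usually plotted is D_ell = ell * (ell + 1) * C_ell / (2 * pi).
    dl = ell * (ell + 1) * cl / (2 * np.pi)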
There's lots of Python in it. There's lots of C, C++, Matlab, and yeah. From all of that, so from terabytes of data, we can extract the power spectrum. So really the data that's on the y-axis, you see again the temperature perturbations,
temperature differences at different scales. As a function of the spherical harmonics, so as a function of the angular scale. This is very large angular scales and this is very small, tiny angular scales. And then there is a whole process in which you have to try to compare this
with theoretical models and fit them somehow. And the fit that I'm showing you is exactly the one that corresponds to the pie chart that I showed you before. In fact, I can probably show you this here
that you can find online. So depending on the amount of atoms, of dark matter, of dark energy that you put in it, you get different kind of predictions, different kind of curves. So for example, if you have 100% atoms,
only atoms, then you would get this kind of curve that, as you obviously see, does not fit the data. So in order to fit the data, you need to decrease the amount of atoms even more and decrease the amount of dark matter.
And as you see, all the predictions are changing. And there are other parameters, for the expansion, for when reionization takes place, and for the initial conditions. And finally, hopefully,
if you have only 70% of dark energy, then you can actually finally match the data. Now obviously, we don't do that in this way. We have to analyze the whole region of parameter space
and use several tools. There is a whole collection of tools available on the NASA cosmology website. I'll show you here. And all these codes are open source. They're all available. You can play with them.
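One of these tools with a Python interface is classy, the wrapper around the CLASS Boltzmann code mentioned later in this talk. As a hedged sketch, this is roughly how one would compute a CMB temperature spectrum for a chosen mix of atoms and dark matter; the parameter values are illustrative, not the Planck best fit.

    from classy import Class

    cosmo = Class()
    cosmo.set({
        "output": "tCl lCl",   # temperature C_ell, plus lensed spectra
        "lensing": "yes",
        "omega_b": 0.0224,     # baryons ("atoms")
        "omega_cdm": 0.120,    # cold dark matter
        "h": 0.67,             # expansion rate today, H0 = 100 h km/s/Mpc
    })
    cosmo.compute()

    cls = cosmo.lensed_cl(2500)    # dictionary with 'ell', 'tt', 'ee', 'te', ...
    cosmo.struct_cleanup()         # free the memory allocated by CLASS

Changing omega_b or omega_cdm and recomputing reproduces exactly the kind of curve-shifting shown in the online demo.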
There are several of them. For the future missions, Euclid in particular, the whole Science Ground Segment,
and also all the forecasting activity that I lead, have chosen Python as the recommended language. So most of it will be in Python. For the actual interpretation, at least of the Planck data, you of course need to do simulations. We use Monte Carlo Markov chains to compare the data
with the whole parameter space of the theoretical models, of the predictions of the theoretical models. And we use Bayesian analysis with that. So we try to build chains that reconstruct the posterior distribution,
that is, the probability of that model given those data. We have several tools for that, but in particular there's one that I want to mention because it's written in Python and it's open source. It's called Monte Python.
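The core idea behind such chains can be sketched in a few lines of generic Python. This is a toy Metropolis-Hastings loop, not Monte Python's actual implementation, and the Gaussian log-likelihood is a made-up stand-in for the real Planck likelihood.

    import numpy as np

    def log_likelihood(params):
        # Made-up stand-in: Gaussian likelihood around some fiducial parameter values,
        # e.g. fractions of atoms, dark matter and dark energy.
        fiducial = np.array([0.05, 0.25, 0.70])
        sigma = np.array([0.01, 0.02, 0.02])
        return -0.5 * np.sum(((params - fiducial) / sigma) ** 2)

    def metropolis(n_steps, start, step_size, seed=0):
        rng = np.random.default_rng(seed)
        chain = [np.asarray(start, dtype=float)]
        logl = log_likelihood(chain[-1])
        for _ in range(n_steps):
            proposal = chain[-1] + rng.normal(0.0, step_size, size=len(start))
            logl_new = log_likelihood(proposal)
            # Accept the jump with probability min(1, L_new / L_old).
            if np.log(rng.uniform()) < logl_new - logl:
                chain.append(proposal)
                logl = logl_new
            else:
                chain.append(chain[-1])
        return np.array(chain)  # the samples approximate the posterior distribution

    chain = metropolis(10000, start=[0.04, 0.30, 0.66], step_size=0.01)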
Monte Python is a Monte Carlo code written in Python. You can find it on GitHub. There's documentation, and the main developers are Benjamin Audren and Julien Lesgourgues, plus many, many others. It will, for example, also be used
for the forecasting activity in Euclid. Now all this requires dealing with complex data and also combining data that come from different sources, from different experiments, and that sometimes look at different things,
like different parameters. You have to deal with several free parameters. The ones that describe your cosmology, so the amount of matter, the amount of dark energy, the expansion, and so on, order of 10 parameters per cosmological model, plus about 10 to 100 parameters
that describe the instrument and all the systematics involved in it. So we need to sample very efficiently in parameter space. There are different possible samplers that are used and also integrated in Monte Python. For a long time, people have used, and are still using, a code
which was actually written in Fortran 90, called CosmoMC, part of which has now been absorbed into Python. Monte Python is a more recent alternative, for the moment written in Python 2.
Of course, it's much more concise than the previous code that was used. It allows running much more stable Monte Carlo chains for days, investigating parameter space,
and it also allows a much more modular structure. One has to understand that this code basically has to interface with different codes that, for example, deal with the data from different experiments, with different samplers in parameter space,
or with different codes that solve the actual equations that describe the universe from the Big Bang down to us. All these modules are sometimes written in different languages, and they're all integrated within Monte Python. So that's a sort of schema of the modularity
of this Monte Python part. That's the part here, for example, integrated here with CLASS, which is a code in C that solves the whole evolution of the background. All the top comes from different data sources,
and then there are all the different samplers here on the right-hand side. Monte Python is also, since recently, on Binder, so if you go, for example, to the link on the top, you can also see part of the CLASS
GitHub repository, transformed into IPython notebooks, and you can play with it; it includes examples and a repository with previous results. What is not yet optimal in Python,
at least for this project, for us, is mainly that it's slow for what we need to do, especially in some parts. The previous code, written in Fortran, has a huge advantage:
it is integrated with OpenMPI, and one can run lots of different chains simultaneously, and also run everything on a grid, so that you investigate a large fraction of parameter space very quickly. This is not yet integrated into the Monte Python part, which uses mpi4py,
but, of course, it would be much more useful to have something like OpenMPI, so, in fact, if you have any ideas on how to improve that, we need input from the data science community.
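For context, mpi4py already makes it possible to launch one independent chain per MPI rank. A minimal, hypothetical sketch of that pattern (the run_chain function below is a stand-in, not Monte Python's actual API):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    def run_chain(seed, n_steps=1000):
        # Placeholder for a real sampler, e.g. the Metropolis loop sketched earlier.
        rng = np.random.default_rng(seed)
        return rng.normal(size=(n_steps, 3))

    samples = run_chain(seed=rank)            # each rank explores parameter space independently
    np.savetxt(f"chain_{rank}.txt", samples)  # one chain file per rank, merged later in the analysis

    # Launched with something like:  mpiexec -n 8 python run_chains.py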
Python is instead used a lot in all the codes for the analysis of the chains and for plotting the posterior credible regions, so basically the regions in parameter space that identify how big each parameter can be.
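As an illustration of that analysis step, this is a rough sketch of turning chain samples into 2D credible-region contours with NumPy and matplotlib; the chain file name and column layout are hypothetical.

    import numpy as np
    import matplotlib.pyplot as plt

    chain = np.loadtxt("chain_0.txt")        # hypothetical chain file; columns = sampled parameters
    x, y = chain[:, 0], chain[:, 1]

    # A 2D histogram of the samples approximates the posterior density.
    hist, xedges, yedges = np.histogram2d(x, y, bins=50)

    # Find the density levels that enclose roughly 68% and 95% of the samples.
    flat = np.sort(hist.ravel())[::-1]
    cdf = np.cumsum(flat) / flat.sum()
    levels = sorted(flat[np.searchsorted(cdf, [0.95, 0.68])])

    xc = 0.5 * (xedges[:-1] + xedges[1:])
    yc = 0.5 * (yedges[:-1] + yedges[1:])
    plt.contour(xc, yc, hist.T, levels=levels)
    plt.xlabel("parameter 1")
    plt.ylabel("parameter 2")
    plt.savefig("posterior_contours.png")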
So that's an example. That's another example of plots that we usually look at, of like 3D sort of plots produced with Python, and also in combining different experiments. So this is, for example, one of the results that I had
when comparing data from Planck with, say, general relativity. So if you see here, this cross corresponds to the model represented by standard general relativity, and you see that while Planck, the blue contours,
is roughly still fine with general relativity, there is some tension when you combine Planck, so information from the early universe, with information from the late-time universe, so with other probes from surveys of galaxies.
So basically you just have to look at the red contours. These combine different data sources from different experiments that combine information from the early time and information from the late time universe, and their combination prefers theories which modify gravity with respect to general relativity.
Of course, this is only at 3.5 sigma, so it's not what you would call a detection, but it's something that will be, of course, of much interest,
that we will be able to detect with the future generation of experiments that will have much higher resolution. In addition, we can produce maps like that. So that's a full-scale map of the polarized emission from the dust of the Milky Way.
It looks like some impressionist portrait, but it's really the polarization of the light radiated by the dust of the Milky Way, and that's important because, in a way, it's a background. The point is that the gravitational waves
that we heard about yesterday can also have an impact on the polarization of the CMB, so on the polarization of this light that gives us a picture of the early universe. And this indirect detection, so a detection of gravitational waves through the CMB,
through this microwave radiation, has not happened yet. We haven't seen that yet, but for the first time, we have a full-sky map of other sources that can mimic the same kind of signal. And so, in the next months,
basically there will be also new data from ground, from balloons, looking at the polarization of the CMB, trying to understand them, again to detect the gravitational waves also in this way. So there is really a revolution that is coming in the next five to ten years
to unveil the dark universe. It's a huge challenge. It's a technological challenge. It's a big data challenge. We have, again, terabytes of data coming per day at the moment and per minute in future radio telescopes.
There's been a lot of investments already from national funding agencies, from ESA and NASA, to understand this problem. And again, it's a big data challenge. And we want to join different communities, of course, to get the best scientific return.
So it's not just about one person working somewhere in some office. It's about joining expertise because this will actually, in a way, determine, and that's a bit drastic, the future of our universe, whether everything will be destroyed
or whether we will expand forever or whether we will just collapse again through gravity. And this depends on how much of this there is in the universe. So overall, we really want to be sure that we look at the big picture and we join expertise coming from different fields
to understand exactly what are we actually observing. So, yeah. Thank you. That's fine.
So, questions? First of all, thank you. It's an excellent talk. Really, really exciting. Thank you. The background radiation, the picture you put up, is not uniform. Why is that?
And to me, it looks like clouds in the sky. Is it fractal? Does that mean anything? Yeah. Okay. So, that's a very, very good question, actually. Let me just show you the map.
Yeah. Yeah. Exactly. So, if you look in different directions, it's mainly isotropic and homogeneous in the sense that it has a mean temperature more or less everywhere at three kelvin. But really, what we are interested in
is exactly the anisotropy, the differences in temperature. This is what is mapped here: tiny, tiny differences in temperature with respect to the mean temperature. And these tiny differences in temperature are due to the fact that in the very early universe,
there were very tiny density perturbations, very tiny differences from place to place, which were then stretched. It's really about the initial conditions of the universe.
Just after the Big Bang, there was a phase of very fast expansion, which is called inflation, in which these very tiny differences in density, places where there was a bit more matter and places where there was a bit less, were stretched to macroscopic scales.
And this affects the temperature of this radiation that was emitted at that time. So, what we really see here is a picture of the initial conditions of the universe. It's a picture of the universe as it was 13 billion years ago. That's the farthest that we can go up to now.
It's, yeah. That's surprising, isn't it? Sorry. That's surprising because, how can I put this? We would expect things to be uniform unless we had additional information. So, in some sciences, for example, where we use probability, we assume uniformity, homogeneity, isotropy,
if we don't know. So, I guess, does this say that people are working to find out why there were differences in matter density or whatever it is? Yeah. It seems surprising to me. Yeah. Yeah. I mean, usually when you solve the evolution of the universe,
you assume homogeneity and isotropy, and then you treat these differences as linear perturbations around a mean, homogeneous and isotropic background universe. And, yeah. I think, yeah.
Pretty amazing, I think. Okay. One question. One question now. You mentioned before that you were looking for ways to accelerate some inner loops and computation and stuff. Have you tried Numba, for instance? Any other solutions? I don't think it's... Like Cython or...
Cython, yes. It's already used, for example, to wrap modules in Monte Python that deal with the data, with the likelihoods, and with the codes which actually solve the evolution of the universe. So, that's already used a lot.
The main problem is that the region of parameter space, it's really huge, especially if you want to test models beyond general relativity, which are still allowed, absolutely allowed by the data. And so, it's really the process of sampling that I think should be somehow...
Yeah, become faster. Okay, thank you. If you have more questions, I invite you to contact Valeria outside directly. Thank you. Thank you very much. Thank you.