Photogrammetric processing and fruition of products in open-source environment applied to the case study of the Archaeological Park of Pompeii
Formal Metadata
Title | Photogrammetric processing and fruition of products in open-source environment applied to the case study of the Archaeological Park of Pompeii
Number of parts | 351
License | CC Attribution 3.0 Unported: You may use, adapt and copy, distribute and transmit the work or content in changed or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or rights holder.
Identifiers | 10.5446/68891 (DOI)
Production year | 2022
Transcript: English (automatically generated)
00:00
Thank you very much. Welcome everybody to the last talk. I just want to acknowledge the other authors of this work, mainly Eugenio Berino: most of this work was developed during his master's thesis. Just a very quick introduction to the table of contents:
00:20
first of all I will give you some hints about the context where we worked and an introduction to it. Then I will show you some details about the planning and the execution of the integrated geometric survey and the post-processing of the resulting data,
00:40
mainly related to the photogrammetric data processing and the orthophoto generation. Then I will show you a procedure for the fruition of these products in a GIS environment, and at the end of the presentation I am going to draw some conclusions and future perspectives of the work. So, first of all, we performed a geometric survey campaign of Domus 5 in
01:08
Insula 14, region 7, of the Pompeii Archaeological Park, located near Naples in Italy. We collected a very large amount of images and then we processed them through photogrammetry
01:22
to obtain a 3D model of the site and then the orthophotos. Finally, we made these products available through a GIS environment. All of these operations were conducted using free and open source software; in particular we used MicMac, CloudCompare and QGIS,
01:45
plus a non-open-source software called Mago, developed by Sara Gagliolo, a colleague at the Geomatics Lab of the University of Genova, which is able to produce high-resolution orthophotos. This work was born thanks to a synergic cooperation between different areas of expertise,
02:08
in particular geomatics, archaeology and structural engineering, each with different aims. The archaeological point of view is more interested in the investigation
02:20
of the intended use of the several rooms in the site. The structural engineering side is more interested in analyzing the current state of the structures, also in terms of structural safety, and in studying retrofitting interventions. And from our side, the geomatics side,
02:44
we want to acquire a very accurate survey and produce high-quality products that serve the aims of the other two disciplines. Here is the context we are looking at. The surveyed area measures about 60 by 35 meters. The complex is formed by
03:06
three domus and 12 shops. It is located in the western part of Pompeii, overlooking Via dell'Abbondanza, which is one of the most important arteries of the site. The focus is on Domus 5.
03:23
You can see here a zoom on this domus, also known as the House of the Queen of England, located again in Insula 14, region 7. It is composed of 27 rooms; please note that we focus on one single room, just for brevity.
03:43
Regarding the survey planning, we used several different geomatics techniques: to have a check on the quality of the resulting products, of course; to exploit their complementary features, so as not to have holes in the survey and in the products; to optimize the logistics
04:05
and the timings of the survey operations, because the site was open to visitors during our survey, so we also had to deal with this problem; and to have a survey that goes
04:20
from a general view to an increasingly detailed one, producing a sort of nested survey that zooms in on the details of the site. This guarantees the completeness and also the control of the survey. Regarding the survey settings, the survey was conducted about two years ago.
04:43
We used several techniques, as I have already mentioned: in particular, terrestrial photogrammetry, aerial photogrammetry, and laser scanning. Here you can see the settings: the red box shows the area where we performed an
05:01
aerial UAV survey at 40 meters above ground level. Then, in the green square, we had an aerial and tilted UAV survey at 15 meters of height. Then, in this smaller area, the laser scanning and the terrestrial photogrammetry; this last one,
05:22
which was mainly dedicated to the vertical walls and the frescoes on the walls. Highlighted in this circle is the room which is studied in more detail. In this slide you can also see the positions of the ground control points and checkpoints that were
05:41
used to put all the surveys in the same reference frame, and also the position of the takeoff and landing pad of the UAV. Regarding the employed techniques: again, we used UAV photogrammetry with a DJI Mavic 2 Pro to have a general overview of the site; then terrestrial photogrammetry for
06:03
the vertical walls of Domus 5, using a Canon EOS 40D camera. Terrestrial laser scanning was performed to survey the interiors of Domus 5, and to
06:24
collect the positions of the ground control points and checkpoints we used both GNSS in RTK mode and a total station. Here you have some interesting details on the several techniques we used. I just want to underline that the dataset is very, very large; we still have not
06:46
finished processing it, so it is a very big piece of work. Just to cite some numbers: for the UAV photogrammetry we have more than 1,000 images; for the terrestrial photogrammetry, more than 7,000 images; 26 terrestrial laser scanner scans; and 25 ground control
07:07
points or checkpoints collected through GNSS and total station. Regarding the timings, just for collecting the data we needed 2 hours for the UAV photogrammetry, 16 hours
07:22
for the terrestrial photogrammetry, 8 hours for the laser scanning and 16 hours for the several GNSS and total station setups. Here you can find some nice pictures that help you understand the very nice environment we worked in. We had the privilege
07:42
of working also when the site was closed, so very early in the morning and very late in the evening. Some of the areas were open to the public while we were surveying, so we also had to deal with the people who were inside the site and not disturb their visit
08:06
too much. Regarding the photogrammetric data processing, we used MicMac, a very well-known open source software, which has very nice advantages related to
08:20
its rigorousness, mainly in the estimation of the external and internal orientation parameters and in the dense matching algorithm, and it also gives the user the possibility to choose the homologous point search criteria. It has some disadvantages too, of course: for example, there is no graphical interface and the user should be quite experienced to make it work.
08:48
The processing follows a multi-scale, multi-resolution approach, which is also able to minimize the outliers and the noise of the generated point clouds.
09:02
The workflow is listed here very quickly: we extract the tie points, we estimate the camera positions, and we generate the 3D point cloud through a dense matching algorithm.
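As an illustration of such a pipeline, a typical MicMac command sequence scripted from Python could look like the sketch below; the matching mode, camera model and parameter values are assumptions, not the exact settings used for the Pompeii dataset.

```python
import subprocess

def run(cmd):
    """Run an mm3d command and stop on failure."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

pattern = ".*\\.JPG"   # image selection pattern (assumption)

# 1. Tie-point extraction; the "Line" mode restricts matching to N adjacent
#    images, which suits a strip-wise acquisition (here 20 neighbours).
run(["mm3d", "Tapioca", "Line", pattern, "1500", "20"])

# 2. Bundle adjustment: internal and external orientation estimation.
run(["mm3d", "Tapas", "Fraser", pattern, "Out=Block"])

# 3. Dense matching to generate the 3D point cloud.
run(["mm3d", "C3DC", "QuickMac", pattern, "Block"])
```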
09:22
For our case study, we used the following processing parameters. We limited the search for matches to the 20 adjacent images, because we have a strip geometry. We ran some tests to understand the best timings and the best number of images to be processed together, and we found that the best number is 500 images, which took about
09:48
24 hours of processing. We chose to have 100 common images between two adjacent blocks, so that the blocks can be joined together once they are processed, and we considered just one room for this example.
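The splitting of the ordered image list into 500-image blocks sharing 100 images with the next block could be scripted, for instance, as in the following minimal sketch; the file names and sizes are placeholders, not the authors' actual script.

```python
def overlapping_blocks(images, block_size=500, overlap=100):
    """Split an ordered image list into blocks that share `overlap` images."""
    step = block_size - overlap
    blocks, start = [], 0
    while start < len(images):
        blocks.append(images[start:start + block_size])
        if start + block_size >= len(images):
            break
        start += step
    return blocks

images = [f"IMG_{i:04d}.JPG" for i in range(1, 1101)]  # dummy file names
for block in overlapping_blocks(images):
    print(len(block), block[0], "...", block[-1])
```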
10:05
Here you can see two blocks, each formed by 500 images, and this part is the common part formed by 100 images. The scaling and the referencing of the obtained point cloud were done using the ground control
10:25
point coordinates. In this case, the ground control points come from natural points identified in the laser scanner point cloud, whose coordinates were extracted using the open source software CloudCompare. Then we used this rather unpronounceable MicMac command
10:46
to collimate these points in the images. To make this command work, we need a sequence of images where the points are digitized and a text file listing their coordinates.
11:01
As an output, we obtain an XML file which contains the points and their coordinates, and we can then use this output to apply the roto-translation and the scaling to the entire model.
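For reference, the roto-translation and scaling applied to the model is a seven-parameter similarity transform. The sketch below shows a minimal least-squares estimation of it from corresponding ground control point coordinates (the Horn/Umeyama solution); it is only an illustration of what the MicMac tools handle internally.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t so that dst ~ s*R@src + t.

    src, dst: (n, 3) arrays of corresponding point coordinates.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - src_c, dst - dst_c
    U, S, Vt = np.linalg.svd(A.T @ B)          # cross-covariance SVD
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / np.sum(A ** 2)
    t = dst_c - s * R @ src_c
    return s, R, t

# The whole model is then transformed as X_world = s * R @ X_model + t,
# and residuals on the GCPs give the one-to-two-centimetre figures quoted.
```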
11:22
Here you can see the location of the chosen points: we chose 15 points, distributed over three different rooms to give a quite robust distribution. These are the points in the laser scanner point cloud, and these are the same points, of course, in the photogrammetric point cloud. We of course checked the results by comparing the two point clouds. We obtained residuals of
11:45
about one or two centimeters on the single ground control points, except for ground control point number six, which had to be removed because of a very high residual with respect to the others. We aligned the point clouds in CloudCompare, as I already said,
12:03
and we also performed a distance computation, again in CloudCompare, through the M3C2 algorithm, which gives us the signed distance between the two point clouds, the laser scanner one and the photogrammetric one. To compare the two point clouds with this algorithm,
12:23
we chose the laser scanner cloud as the reference cloud, with the normal direction horizontally oriented, and we performed this test on a portion of the central wall fresco. Here you can see the photogrammetric point cloud, which has a spacing of four millimeters, the
12:44
laser scanner one, which has a spacing of one millimeter, and the signed distance. Here you can find the distribution of the distances over the points: we obtained an average distance of about five plus or minus five millimeters, so we are quite happy with this result.
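As a rough illustration of this comparison (CloudCompare's M3C2 plugin is considerably more sophisticated, with local normal estimation and projection cylinders), a signed cloud-to-cloud distance along a fixed horizontal normal could be sketched like this:

```python
import numpy as np

def signed_distance_along_normal(reference, compared, normal, radius=0.02):
    """Average signed offset of `compared` points near each reference point,
    measured along a fixed normal direction (a crude M3C2-like measure)."""
    normal = normal / np.linalg.norm(normal)
    dists = []
    for p in reference:
        d = compared - p
        along = d @ normal                                 # signed offset along normal
        radial = np.linalg.norm(d - np.outer(along, normal), axis=1)
        near = radial < radius                             # crude "projection cylinder"
        if near.any():
            dists.append(along[near].mean())
    return np.array(dists)

# e.g. normal = np.array([1.0, 0.0, 0.0]) for a horizontally oriented normal;
# the mean and standard deviation of the result summarise the cloud agreement.
```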
13:06
Concerning the orthophoto generation, we used the Mago software, which was developed by my colleague Sara Gagliolo during her PhD. I just want to acknowledge that she is the winner of the Autech prize in 2022 for this work, so congratulations to Sara. The code is more
13:28
than 3,000 lines of C++ code; it has a very simple graphical user interface realized in Qt, and it also exploits the open source library OpenCV, mainly for matrix and image
13:45
management. The main feature of Mago is that it is able to overcome the approximation that is typically introduced by a mesh as a representation of the object, so through Mago we can produce high-resolution orthophotos with a maximum resolution equal to the ground
14:05
sample distance. The workflow is very simple, and you can also find it in the references cited in the paper. The first step is the definition of the orthophoto plane, which is parallel to the wall plane; then the acquisition of the internal and external
14:24
orientation parameters of the images from which you want to produce the orthophoto, the definition of the orthophoto dimensions and resolution, and the automatic definition of an ancillary reference system, which is useful to let the user understand the position and the visibility of the
14:46
points. Then there are two further steps. The first one is an iterative automatic process that determines the best plane, given by three points, defined by the intersection
15:02
of the collinearity rays with the point cloud; then the procedure automatically generates a mesh directly from the point cloud, so you do not have any further simplification or resampling, but you just build an adaptive mesh at the highest possible resolution. Finally, the color of each
15:26
pixel is projected from the images onto the orthophoto map.
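As a toy illustration of this final projection step (Mago's actual C++/OpenCV implementation builds an adaptive mesh and handles visibility, which this sketch ignores), an orthographic rasterization of a colored point cloud onto a wall plane at a given ground sample distance could look like:

```python
import numpy as np

def simple_orthophoto(points, colors, origin, u_dir, v_dir, width_m, height_m, gsd):
    """Project a colored point cloud orthographically onto a plane defined by
    an origin and two in-plane axes, at pixel size `gsd` (all in metres)."""
    u_dir = u_dir / np.linalg.norm(u_dir)      # horizontal axis of the plane
    v_dir = v_dir / np.linalg.norm(v_dir)      # vertical axis of the plane
    n = np.cross(u_dir, v_dir)                 # plane normal
    cols, rows = int(width_m / gsd), int(height_m / gsd)
    img = np.zeros((rows, cols, 3), dtype=np.uint8)
    depth = np.full((rows, cols), np.inf)      # keep the point closest to the plane

    rel = points - origin
    u, v, d = rel @ u_dir, rel @ v_dir, np.abs(rel @ n)
    c = (u / gsd).astype(int)
    r = rows - 1 - (v / gsd).astype(int)       # image rows grow downwards
    ok = (c >= 0) & (c < cols) & (r >= 0) & (r < rows)
    for i in np.flatnonzero(ok):
        if d[i] < depth[r[i], c[i]]:
            depth[r[i], c[i]] = d[i]
            img[r[i], c[i]] = colors[i]
    return img
```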
15:43
Here you can see one of the latest updates of Mago, which can now produce orthophotos for non-coplanar planes, i.e. when you have walls that form an edge. This is done by introducing a rotation to place the two walls along the same plane. You can see, for example, a perspective view of one wall here and another wall here, which form an edge; here you have the
16:04
orthophoto of the first wall and of the second wall, and then you can put them all together just by unrolling them. Regarding the fruition of these products, we created a QGIS project to include both the planimetric view and the
16:22
altimetric distribution of the data, so in some way we created a 2D-plus-1 GIS environment. Here you can see, for example, the 3D perspective view of three walls of a room; we then unfold this cube into three planes, where we have an x-axis
16:41
related to each orthophoto and then a y-axis that corresponds to the z-axis of the 3D environment. This procedure was realized through a master-slave architecture in QGIS, so we are able to manage two different reference frames: the first one is the traditional
17:04
planimetric one, where you can visualize the x-y coordinates, and the second one is related to the vertical plane of the walls, so you can display the orthophoto and also have some information about the altimetric data. The master project is dedicated to the x-y plane
17:26
and contains some polylines representing, in this case, the perimeter of the Domus 5 room walls. Each slave project is dedicated to a specific wall, has in it the corresponding orthophoto projected onto its vertical plane, and it is
17:46
connected to the master project through a QGIS action. In reality we produced two QGIS actions. The first one is very simple: you click on the border of the room
18:03
and you open the corresponding orthophoto. With the second instruction you can open the slave project: you simply have to insert an action of 'Windows' type that concatenates the path to the QGIS executable and the path to the project folder, followed by the
18:24
value of the column named 'project', which contains the name of the slave project you want to be opened.
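The snippet below only illustrates what that second action effectively does, namely launching a new QGIS instance on the slave project whose name is stored in the clicked feature's 'project' column; the paths and the field name are hypothetical.

```python
import subprocess
from pathlib import Path

QGIS_EXE = Path("/usr/bin/qgis")                      # path to the QGIS executable (assumption)
PROJECT_DIR = Path("/data/pompeii/slave_projects")    # folder with the slave projects (assumption)

def open_slave_project(project_name):
    """Launch QGIS on the slave project named in the feature's 'project' field."""
    project_file = PROJECT_DIR / f"{project_name}.qgz"
    subprocess.Popen([str(QGIS_EXE), str(project_file)])

# e.g. open_slave_project("wall_north")  # value read from the 'project' column
```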
18:40
Each slave project contains the orthophoto of the wall, of course, and three shapefiles: one point, one line and one polygon shapefile. These three shapefiles are set up so that the attribute table is automatically updated once you insert a new geometry. Here you have a very quick example of the slave project related to one wall,
19:06
and some details of the table: the table is updated with the image coordinates of the inserted geometry and also with the real-world coordinates, i.e. the ETRF
19:21
UTM 33 X, Y and Z of the point clicked on the orthophoto, through the relation that you can find in the paper; this is just an example.
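As an example of the kind of relation involved (the exact formula is in the paper; the wall origin and azimuth below are assumptions), mapping a point clicked on the wall orthophoto to ETRF UTM 33 coordinates could be sketched as:

```python
import numpy as np

def wall_to_world(s, h, origin_e, origin_n, origin_z, azimuth_deg):
    """Convert plane coordinates (s = distance along the wall, h = height)
    into real-world East, North, Z, given the wall's origin and plan azimuth."""
    a = np.radians(azimuth_deg)        # direction of the wall trace in plan view
    east = origin_e + s * np.sin(a)
    north = origin_n + s * np.cos(a)
    z = origin_z + h
    return east, north, z

# usage: pass the coordinates clicked on the orthophoto plus the wall's
# origin and azimuth (hypothetical values) to obtain East, North, Z.
```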
19:45
The same holds for the line shapefile, where we automatically create a line and compute its length, and for the polygon shapefile, where we compute the area and the perimeter automatically. We also performed a
20:01
supervised classification of the wall based on its state of conservation, using the very well-known i.gensig and i.maxlik GRASS commands; in the table you can see the area covered by each class, both as a percentage and in square meters. The training areas are user defined.
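A minimal sketch of such a maximum-likelihood classification, scripted through the GRASS Python API with hypothetical map, group and signature names (parameter names should be checked against the GRASS version in use), is:

```python
import grass.script as gs

# Generate spectral signatures from the user-defined training areas
# (raster map names and group names are hypothetical).
gs.run_command("i.gensig",
               trainingmap="training_areas",
               group="wall_ortho",
               subgroup="wall_ortho",
               signaturefile="conservation_sig")

# Maximum-likelihood classification of the wall orthophoto bands.
gs.run_command("i.maxlik",
               group="wall_ortho",
               subgroup="wall_ortho",
               signaturefile="conservation_sig",
               output="conservation_classes")

# A report on "conservation_classes" (e.g. r.report) then gives the area per class.
```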
20:24
With them you can of course obtain this kind of classification. Finally, we verified the planarity of the wall, again by producing a DSM of a specific part of the wall and simply computing the distance between this DSM and a vertical plane. We found
20:45
that there is a sort of deviation, which may be related to a not perfectly vertical reference frame; we still have to investigate this aspect further.
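A toy version of this planarity check, fitting a vertical plane to the wall points and looking at the signed deviations (only an illustration, not the procedure actually used on the DSM), could be:

```python
import numpy as np

def vertical_plane_deviation(points):
    """Signed deviations of wall points from a best-fitting vertical plane."""
    xy = points[:, :2]
    centroid = xy.mean(axis=0)
    # principal direction of the wall trace in plan view
    _, _, vt = np.linalg.svd(xy - centroid)
    wall_dir = vt[0]
    normal = np.array([-wall_dir[1], wall_dir[0], 0.0])   # horizontal plane normal
    dev = (points - np.append(centroid, 0.0)) @ normal
    return dev.mean(), dev.std()

# A systematic trend of the deviations with height would hint at a tilted
# reference frame rather than a real lean of the wall.
```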
21:01
So, just to conclude very quickly: we showed the contribution of geomatics to archaeology through these survey and processing techniques and this innovative approach. We performed a nested survey and, in some way, also a nested fruition of the products through this master-slave architecture, which gives the user
21:23
the possibility of making measurements on the orthophotos and also visualizing them. We very quickly saw the classified orthophoto and the evaluation of the deviation of the wall. The main point, in my opinion, is that a structure like this is very useful for the
21:43
realization of a database of the entire site, and it can also be used by users who are not experts in geomatics, which will be very interesting for archaeologists, for example. Thank you very much; I am available for any questions.