
Photogrammetric processing and fruition of products in open-source environment applied to the case study of the Archaeological Park of Pompeii


Formal Metadata

Title
Photogrammetric processing and fruition of products in open-source environment applied to the case study of the Archaeological Park of Pompeii
Series Title
Number of Parts
351
Author
Contributors
License
CC Attribution 3.0 Unported:
You may use, modify, and reproduce, distribute, and make the work or its content publicly available in unchanged or modified form for any legal purpose, provided that the author/rights holder is credited in the manner specified by them.
Identifiers
Publisher
Publication Year
Language
Production Year
2022

Content Metadata

Subject Area
Genre
Abstract
The geomatic strategy for the survey campaign, data processing, and product fruition in an archaeological context is presented and discussed. The case study is the Domus V situated in the Archaeological Park of Pompeii (Regio VII, Insula 14), which was surveyed in September 2020 by the Geomatics Laboratory of Genoa University in collaboration with the archaeologist group of the same University, under the ministerial concession DG 553 Class 34.31.07/246.7 of 26 January 2016 and its renewal of 9 April 2019 (34.31.07/3.4.7/2018). The survey campaign involved the following integrated geomatic techniques:
- UAV photogrammetry, performed with a DJI Mavic 2 Pro. The shooting geometry was nadiral at two different altitudes, 40 m and 15 m. An additional survey with a tilting angle of 45° at a flight altitude of 15 m was performed along concentric paths around the site. The UAV dataset is composed of 1400 images. The photogrammetric surveys were georeferenced thanks to temporary Ground Control Points (GCPs), surveyed with GNSS in Network Real Time Kinematic (NRTK) positioning mode.
- Terrestrial photogrammetry: 7000 images of the internal vertical walls were taken with a Canon EOS 40D camera at a shooting distance of about 2 m, following a bottom-to-top trajectory.
- Terrestrial laser scanning, using the Z+F 5006h phase-difference instrument.
The integrated survey made it possible to move from a general view of the entire site to an increasingly detailed one, mainly aimed at the vertical walls, thanks to the global framing provided by the UAV survey. The UAV and terrestrial photogrammetry datasets were processed with the open-source software MicMac [1] to create the dense point clouds, and with CloudCompare [2] to align the different blocks. MicMac was chosen for its open-source nature and for the rigor of its photogrammetric processing, both in the estimation of the external/internal orientation parameters and in the dense matching used to obtain the 3D point clouds from the images, which is based on a multi-scale, multi-resolution pyramidal approach that minimizes outliers and noise. Because the computational time grows non-linearly with the number of images, the MicMac processing was split into blocks of 500 images each (about 24 hours of processing time), with 100 overlapping images between two consecutive blocks, so that the blocks could be aligned through a point-to-point strategy. The obtained 3D point cloud was oriented and scaled using 15 natural points identified in the terrestrial laser scanner point cloud, obtaining deviations in point positions ranging between 1 and 2 cm. The quality of the alignment was tested by computing the distance between the laser scanner and the photogrammetric point clouds with the CloudCompare M3C2 algorithm [3] on a representative area of 1.60 m × 2.25 m of the fresco on the central wall of the surveyed room, obtaining distances of ±5 mm orthogonally to the wall. Moreover, the software MAGO [4], developed in a C++ environment within the Geomatics Laboratory, was used to produce high-resolution orthophotos of the vertical walls. MAGO exploits a step-by-step self-adaptive mesh that fits the dense point cloud with triangular planar patches, onto which the image pixels are projected at their original resolution via the collinearity equations.
The needed inputs are the image(s) to be orthorectified, the external and internal orientation parameters, the user-defined orthophoto plane, and the output orthophoto resolution. MAGO was recently updated to generate orthophotos of non-coplanar adjacent walls, i.e., walls forming an edge between them, through a rotation that brings the two walls onto a continuous common plane. The orthophotos were made accessible and viewable via a QGIS [5] project built to manage two different reference frames: the traditional planimetric plane (X,Y) and the vertical plane of the walls (X-Y,Z), where X-Y represents the planimetric coordinate along the wall direction. This introduces the third dimension into the typical GIS representation, thus realizing a 3D GIS environment. The QGIS project is organized with a "master-slave" architecture: the master project is dedicated to the (X,Y) plane and reports the vector geometries (lines) representing the perimeter of the walls, whereas a separate slave project is dedicated to each specific wall, with the corresponding orthophoto in an (X-Y,Z) plane. Each slave project is connected to the master thanks to a QGIS action that opens it when the corresponding wall is clicked in the master project. In each sub-project, the orthophoto of the wall is displayed together with three default shapefiles: a point, a line, and a polygon shapefile, respectively. The attribute tables of the three shapefiles are set to be updated automatically with the following information once the user introduces a new geometry:
- point shapefile: the image coordinates (x, y) in pixel units and the corresponding object coordinates (E, N, Z), where E and N represent the east and north coordinates in the ETRF2000-2008.0/UTM33N reference frame and Z is the height of the point on the wall;
- line shapefile: the length of the drawn line, in meters;
- polygon shapefile: the length of the perimeter and the polygon surface, in meters and square meters, respectively.
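For reference, the collinearity equations used for this projection are given below in their standard photogrammetric form (with the usual symbol conventions rather than MAGO-specific notation):

```latex
x = x_0 - c\,\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}
                  {r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}, \qquad
y = y_0 - c\,\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}
                  {r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}
```

where (X_0, Y_0, Z_0) is the projection centre, (x_0, y_0, c) are the interior orientation parameters (principal point and principal distance), and r_ij are the elements of the rotation matrix describing the camera attitude.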
Keywords
Transcript: English (automatically generated)
Thank you very much. Welcome, everybody, to the last speech. I just want to acknowledge the other authors of this work, mainly Eugenio Berino; most of this work was developed during his master's thesis. Just a few very quick words about the table of contents: first of all, I will give you some hints about the context in which we worked and an introduction to it. Then I will show you some details about the planning and execution of the integrated geomatic survey and the post-processing of the resulting data, mainly related to the photogrammetric data processing and the orthophoto generation. Then I will show you a procedure for the fruition of these products in a GIS environment, and at the end of the presentation I am going to draw some conclusions and outline future perspectives of the work.
So, first of all, we performed a geomatic survey campaign of Domus V, in Insula 14 of Regio VII of the Pompeii Archaeological Park, located near Naples in Italy. We collected a very large number of images and then processed them through photogrammetry to obtain a 3D model of the site and then the orthophotos. Finally, we made these products available through a GIS environment. All of these operations were carried out using free and open-source software, in particular MicMac, CloudCompare and QGIS, plus a non-open-source software called MAGO, developed by Sara Gagliolo, a colleague at the Geomatics Laboratory of the University of Genoa, which is able to produce high-resolution orthophotos. This work was born from a synergic cooperation between different areas of expertise, in particular geomatics, archaeology and structural engineering, each with different aims. The archaeological point of view is more interested in investigating the intended use of the several rooms of the site. The structural engineering expertise is more interested in analyzing the state of the structures, also in terms of structural safety, and in studying retrofitting interventions. From our side, the geomatics side, we want to acquire a very accurate survey and produce high-quality products that support the aims of the other two disciplines. Here is the context we are looking at. The dimensions of the surveyed area are about 60 by 35 meters. The complex is formed by three domus and 12 shops. It is located in the western part of Pompeii, overlooking Via dell'Abbondanza, one of the most important arteries of the site. The focus is on Domus V. You can see here a zoom on this domus, also known as the House of the Queen of England, located again in Insula 14 of Regio VII. It is composed of 27 rooms; note that, for brevity, we focus on one single room.
Regarding the survey planning, we used several different geomatic techniques in order to have a check on the quality of the resulting products, of course, and to exploit their complementary features, so as not to have holes in the survey or in the products; to optimize the logistics and the timing of the survey operations, because the site was open to visitors during our survey, so we also had to deal with this problem; and to have a survey that moves from a general view to an increasingly detailed one, so as to produce a sort of nested survey zooming in on the details of the site. This guarantees the completeness and also the control of the survey. Regarding the survey settings, the survey was conducted about two years ago.
We used several techniques, as I have already mentioned; in particular, terrestrial photogrammetry, aerial photogrammetry and laser scanning. Here you can see the settings: the red box shows the area where we performed an aerial UAV survey at 40 meters above ground level. Then, in the green square, we performed a nadiral and tilted UAV survey at 15 meters of height. Then, in this smaller area, we performed the laser scanning and the terrestrial photogrammetry; this last one was mainly devoted to the vertical walls and the frescoes on the walls, and the room highlighted in this circle is the one studied in more detail. In this slide you can also see the positions of the ground control points and checkpoints that were used to put all the surveys in the same reference frame, as well as the position of the takeoff and landing pad of the UAV. Regarding the employed techniques: we used UAV photogrammetry with a DJI Mavic 2 Pro to obtain a general overview of the site, and terrestrial photogrammetry, with a Canon EOS 40D camera, for the vertical walls of Domus V. The terrestrial laser scanning was performed to survey the interiors of Domus V, and to collect the positions of ground control points and checkpoints we used both GNSS in NRTK mode and a total station. Here you have some details on the several techniques we used. I just want to underline that the dataset is very large; we still have not finished processing it, so it is a huge amount of work. Just to cite some numbers: for the UAV photogrammetry we have more than 1,000 images; for the terrestrial photogrammetry, more than 7,000 images; 26 scans with the terrestrial laser scanner; and 25 ground control points or checkpoints collected with GNSS and total station. Regarding the timing, just for collecting the data we needed 2 hours for the UAV photogrammetry, 16 hours for the terrestrial photogrammetry, 8 hours for the laser scanning and 16 hours for the several GNSS and total station sessions. Here you can find some pictures that help you understand the very nice environment in which we worked. We had the privilege of working also when the site was closed, so very early in the morning and very late in the evening. Some of the areas were open to the public while we were surveying the site, so we also had to deal with the people who were inside the site and not disturb their visits too much.
Regarding the photogrammetric data processing, we used MicMac, a very well-known open-source software, which has some very nice advantages related to its rigorousness, mainly in the estimation of the external and internal orientation parameters and in the dense matching algorithm, and which also gives the user the possibility of choosing the homologous point search criteria. It also has some disadvantages, of course: for example, there is no graphical interface and the user should be quite experienced to make it work.
The process is based on a multi-scale, multi-resolution approach that is able to minimize the outliers and the noise of the generated point clouds. The workflow, listed here very quickly, is: extract the tie points, estimate the camera positions, and generate the 3D point cloud through the dense matching algorithm. For our case study we used the following processing parameters: we limited the search for matches to the 20 adjacent images, because we have a strip geometry. We made some tests to find the best timing and the best number of images to be processed together, and we found that the best block size is 500 images, which took about 24 hours of processing; we chose to have 100 common images between two adjacent blocks in order to merge the blocks once they are processed, and we considered just one room for this example. Here you can see two blocks of 500 images each, and the part in common between them is formed by 100 images.
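As a rough illustration of this block strategy (not the authors' actual scripts), the following Python sketch splits an ordered image list into blocks of 500 images with a 100-image overlap between consecutive blocks. The block size and overlap come from the talk; the folder path and the helper itself are hypothetical, and the MicMac tool names in the comments are only indicative of the usual pipeline.

```python
from pathlib import Path

def split_into_blocks(images, block_size=500, overlap=100):
    """Split an ordered image list into overlapping blocks.

    Consecutive blocks share `overlap` images so that the resulting
    point clouds can later be aligned block-to-block (e.g. with a
    point-to-point strategy in CloudCompare).
    """
    step = block_size - overlap
    blocks = []
    start = 0
    while start < len(images):
        blocks.append(images[start:start + block_size])
        if start + block_size >= len(images):
            break
        start += step
    return blocks

# Hypothetical usage: ~1400 UAV images sorted by acquisition order.
images = sorted(Path("uav_images").glob("*.JPG"))
blocks = split_into_blocks(images, block_size=500, overlap=100)
for i, block in enumerate(blocks):
    print(f"block {i}: {len(block)} images")
    # Each block would then go through the usual MicMac pipeline,
    # e.g. tie-point extraction (Tapioca), orientation (Tapas) and
    # dense matching (C3DC/Malt), run separately per block.
```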
The scaling and the georeferencing of the obtained point cloud were done by means of the ground control point coordinates. In this case, the ground control points are natural points identified in the laser scanner point cloud, whose coordinates were extracted using the open-source software CloudCompare. Then we used a MicMac command with a rather unpronounceable name to collimate these points in the images. To make this command work, we need a sequence of images where the points are digitized and a txt file listing their coordinates. As an output, we obtain an XML file containing the points and their coordinates, and we can then use this output to apply the roto-translation and the scaling to the entire model. Here you can see the locations of the chosen points: we chose, for example, 15 points spread over three different rooms, so as to have a quite robust distribution of the points. These are the points in the laser scanner point cloud, and these are, of course, the same points in the photogrammetric point cloud.
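For readers unfamiliar with the "roto-translation and scaling" step, the sketch below applies a 7-parameter similarity (Helmert) transformation to a point cloud with NumPy. It only illustrates the concept: the rotation, scale and translation values are placeholders, whereas in the actual workflow these parameters are estimated from the ground control points and applied inside MicMac/CloudCompare.

```python
import numpy as np

def similarity_transform(points, scale, R, t):
    """Apply a 3D similarity transformation X' = scale * R @ X + t.

    points : (N, 3) array of point coordinates
    scale  : global scale factor
    R      : (3, 3) rotation matrix
    t      : (3,) translation vector
    """
    return scale * points @ R.T + t

# Placeholder parameters (in practice estimated from the 15 GCPs).
theta = np.radians(12.0)                      # rotation about Z, illustrative
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
scale = 1.73                                  # photogrammetric model -> meters
t = np.array([452000.0, 4506000.0, 30.0])     # illustrative UTM33N offset

cloud = np.random.rand(1000, 3)               # stand-in for the dense cloud
cloud_georef = similarity_transform(cloud, scale, R, t)
```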
We checked, of course, the results obtained by comparing the two point clouds: we obtained deviations of about one to two centimeters on the single ground control points, except for ground control point number 6, which was removed because of a very high residual with respect to the others. We aligned the point clouds through CloudCompare, as I already said, and we also performed a distance computation, again in CloudCompare, through the M3C2 algorithm, which gives the signed distance between the two point clouds, the laser scanner one and the photogrammetric one. To compare the two point clouds with this algorithm, we chose the laser scanner cloud as the reference cloud, with the normal direction horizontally oriented, and we performed this test on a portion of the fresco on the central wall. Here you can see the photogrammetric point cloud, with a spacing of four millimeters, the laser scanner one, with a spacing of one millimeter, and the signed distances. Here you can see the distribution of the distances over the points: we obtained distances of about ±5 millimeters, so we are quite happy with this result.
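The following NumPy sketch is not M3C2 (which averages over local neighbourhoods along robustly estimated normals) but a much simpler stand-in: for each photogrammetric point it takes the nearest laser scanner point and projects the offset onto a fixed horizontal normal, which is roughly the kind of wall-orthogonal signed distance discussed here. The point clouds and the normal direction are purely illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def signed_distance_along_normal(reference, compared, normal):
    """Signed distance of each `compared` point from its nearest
    `reference` point, projected onto a fixed unit normal."""
    normal = np.asarray(normal, dtype=float)
    normal /= np.linalg.norm(normal)
    tree = cKDTree(reference)
    _, idx = tree.query(compared)             # nearest reference point
    offsets = compared - reference[idx]
    return offsets @ normal                   # signed, in the same units

# Illustrative data: two clouds of a roughly vertical wall patch.
laser = np.random.rand(5000, 3) * [1.6, 0.01, 2.25]          # reference cloud
photo = laser[:2000] + np.random.normal(0, 0.005, (2000, 3))  # noisy copy
dist = signed_distance_along_normal(laser, photo, normal=[0.0, 1.0, 0.0])
print(f"mean = {dist.mean()*1000:.1f} mm, std = {dist.std()*1000:.1f} mm")
```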
Concerning the orthophoto generation, we used a software called MAGO, developed by my colleague Sara Gagliolo during her PhD. I just want to mention that she is the winner of the AUTeC prize in 2022 for this work, so congratulations to Sara. The code is more than 3,000 lines of C++; it has a very simple graphical user interface realized in Qt, and it also exploits the open-source library OpenCV, mainly for matrix and image management. The main feature of MAGO is that it is able to overcome the approximation typically introduced by a mesh as a representation of the object, so with MAGO we can produce high-resolution orthophotos with a maximum resolution equal to the ground sample distance. The workflow is very simple; you can also find it in some references cited in the paper. The first step is the definition of the orthophoto plane, which is parallel to the wall plane; then come the acquisition of the internal and external orientation parameters of the image(s) from which you want to produce the orthophoto, the definition of the orthophoto dimensions and resolution, and the automatic definition of an ancillary reference system, which is useful to let the user understand the position and visibility of the points. Then there are two further steps. The first one is an iterative automatic process that determines the best plane, defined by three points, given by the intersection of the collinearity rays with the point cloud; the procedure then automatically generates a mesh directly from the point cloud, so there is no further simplification or resampling, and you build an adaptive mesh at the highest possible resolution. Finally, the color of each pixel is projected from the image onto the orthophoto map. Here you can see one of the latest updates of MAGO, which can now also produce orthophotos of non-coplanar planes, i.e., walls that form an edge. This is done by introducing a rotation that places the two walls on the same plane: you can see, for example, a perspective view of one wall here and another wall here, which form an edge; here you have the orthophoto of the first wall and of the second wall, and then you can put them all together just by unrolling them.
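To make the projection step concrete, here is a minimal Python sketch of the collinearity-based idea behind this kind of orthoprojection: an object point on the orthophoto plane is projected into the source image with the collinearity equations and the corresponding pixel color is sampled. This is a simplified illustration, not MAGO's actual C++ implementation; the camera parameters and the image are placeholders.

```python
import numpy as np

def project_collinearity(X, X0, R, c, x0=0.0, y0=0.0):
    """Project an object point X into image coordinates using the
    collinearity equations.

    X  : (3,) object point
    X0 : (3,) camera projection centre
    R  : (3, 3) rotation matrix from object space to camera space
    c  : principal distance (focal length), same units as x0/y0
    """
    d = R @ (X - X0)                      # point in camera coordinates
    x = x0 - c * d[0] / d[2]
    y = y0 - c * d[1] / d[2]
    return x, y

def sample_color(image, x_mm, y_mm, pixel_size_mm):
    """Convert image-plane coordinates (origin at the image centre)
    to pixel indices and return the color, or None if outside."""
    h, w = image.shape[:2]
    col = int(round(w / 2 + x_mm / pixel_size_mm))
    row = int(round(h / 2 - y_mm / pixel_size_mm))
    if 0 <= row < h and 0 <= col < w:
        return image[row, col]
    return None

# Placeholder camera and image, purely illustrative.
R = np.eye(3)                                     # camera looking along -Z
X0 = np.array([10.5, 7.0, 10.0])                  # projection centre
image = np.zeros((4000, 6000, 3), dtype=np.uint8)

ortho_point = np.array([10.7, 7.2, 1.5])          # a point on the wall plane
x, y = project_collinearity(ortho_point, X0, R, c=35.0)
color = sample_color(image, x, y, pixel_size_mm=0.006)
```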
Regarding the fruition of these products, we created a QGIS project that includes both the planimetric view and the altimetric distribution of the data, so in some way we created a 2D+1 GIS environment. Here you can see, for example, the 3D perspective view of three walls of a room; we then unfold this box into three planes, where the x-axis follows each orthophoto and the y-axis corresponds to the z-axis of the 3D environment. This was realized through a master-slave architecture in QGIS, so we are able to manage two different reference frames: the first one is the traditional planimetric one, where you can visualize the X,Y coordinates, and the second one is related to the vertical plane of the walls, where you can display the orthophoto and also obtain information about the altimetric data. The master project is dedicated to the X,Y plane and contains polylines representing, in this case, the perimeter of the walls of the Domus V room. Each slave project is dedicated to a specific wall, has in it the corresponding orthophoto projected onto an (X-Y, Z) plane, and is connected to the master project through a QGIS action. In reality, we produced two QGIS actions: the first one is very simple, you click on the borders of the room and the corresponding orthophoto opens; the second instruction opens the slave project, and you just have to insert a Windows action type, the path to the QGIS executable, the path to the project folder, and the column name_project, which identifies the project to be opened.
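As a hedged illustration of how such a master-project action could be set up (not necessarily the authors' exact configuration), the snippet below could be used as a Python-type QGIS action on the wall-perimeter layer: QGIS substitutes the clicked feature's field value into the `[% "name_project" %]` placeholder before the code runs, and a new QGIS instance is launched on the corresponding slave project. The field name, executable path and project folder are assumptions.

```python
# QGIS "Python" action body attached to the wall-perimeter layer of the
# master project. Before execution, QGIS replaces the [% "name_project" %]
# token with the clicked feature's attribute value.
import subprocess

qgis_exe = r"C:\Program Files\QGIS 3.28\bin\qgis-bin.exe"   # assumed install path
project_dir = r"C:\pompeii\slave_projects"                  # assumed project folder
project_name = '[% "name_project" %]'                       # e.g. "wall_north.qgz"

# Launch a new QGIS instance on the slave project of the clicked wall.
subprocess.Popen([qgis_exe, project_dir + "\\" + project_name])
```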
Each slave project contains, of course, the orthophoto of the wall and three shapefiles: a point, a line and a polygon shapefile. These three shapefiles are set up in such a way that the attribute table is updated automatically once you insert a new geometry. Here you have a very quick example of the slave project related to a wall, and then some details of the attribute table: the table is updated with the image coordinates of the clicked point on the orthophoto and also with the real-world coordinates, i.e., the ETRF2000 / UTM33N east, north and Z values, through a relation you can find in the paper; this is just an example. The same holds for the line shapefile, where the length of a drawn line is computed automatically, and for the polygon shapefile, where the area and the perimeter are computed automatically.
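One way to obtain attribute tables that fill themselves in when a geometry is digitized (offered here as a plausible sketch, not the authors' documented setup) is to attach QGIS default-value expressions to the fields with PyQGIS, e.g. `$length` for the line layer and `$area`/`$perimeter` for the polygon layer. The layer and field names below are assumptions.

```python
# Run in the QGIS Python console with the slave project open.
from qgis.core import QgsProject, QgsDefaultValue

def set_default(layer_name, field_name, expression):
    """Attach a default-value expression to a field so it is evaluated
    automatically whenever a new feature is created."""
    layer = QgsProject.instance().mapLayersByName(layer_name)[0]
    idx = layer.fields().indexFromName(field_name)
    layer.setDefaultValueDefinition(idx, QgsDefaultValue(expression, True))

# Assumed layer/field names for the three editing shapefiles.
set_default("wall_lines", "length_m", "round($length, 3)")
set_default("wall_polygons", "area_m2", "round($area, 3)")
set_default("wall_polygons", "perim_m", "round($perimeter, 3)")
set_default("wall_points", "xy_wall", "$x")   # along-wall planimetric coordinate
set_default("wall_points", "z_wall", "$y")    # height of the point on the wall
```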
We also performed a supervised classification of the wall based on the state of conservation, using the well-known i.gensig and i.maxlik GRASS GIS commands; in the table you can see the area covered by each class, both as a percentage and in square meters. The training areas are user-defined, but you can of course obtain this kind of classification. Finally, we verified the planarity of the wall by producing a DSM of a specific part of the wall and computing the distance between this DSM and a vertical plane. We found a sort of systematic deviation, which may be related to a not perfectly vertical reference frame; we still have to investigate this aspect further.
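The planarity check can be pictured with the small NumPy sketch below: a best-fit plane is estimated from the wall DSM points by least squares and compared with an ideal vertical plane, and the deviations are the point-to-plane distances. This is only an illustration of the kind of computation described, with synthetic data, not the authors' procedure.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set: returns (normal, centroid)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid              # normal = direction of least variance

def distances_to_plane(points, normal, point_on_plane):
    normal = normal / np.linalg.norm(normal)
    return (points - point_on_plane) @ normal

# Synthetic wall DSM: roughly the X-Z plane with a slight tilt and noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 3.0, 5000)
z = rng.uniform(0, 2.0, 5000)
y = 0.002 * z + rng.normal(0, 0.003, 5000)   # 2 mm/m tilt + 3 mm noise
wall = np.column_stack([x, y, z])

normal, centroid = fit_plane(wall)
tilt = np.degrees(np.arccos(abs(normal @ np.array([0.0, 1.0, 0.0]))))
dev = distances_to_plane(wall, np.array([0.0, 1.0, 0.0]), centroid)
print(f"tilt of best-fit plane w.r.t. vertical: {tilt:.3f} deg")
print(f"deviation from ideal vertical plane: std = {dev.std()*1000:.1f} mm")
```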
To conclude very quickly: we showed the contribution of geomatics to archaeology through these processing and survey techniques and an innovative approach. We performed a nested survey and, in some way, also a nested fruition of the products through this master-slave architecture, which gives the user the possibility of making measurements on the orthophotos and also of visualizing them. We very quickly saw the classification of the orthophoto and the evaluation of the deviation of the wall. The main point, in my opinion, is that a structure like this is very useful for building a database of the entire site, and it can also be used by users who are not experts in geomatics, which will be very interesting for archaeologists, for example. Thank you very much, and I am available for any questions.