VivaCity Smart City Platform
Formal Metadata
Number of Parts | 95
License | CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers | 10.5446/15592 (DOI)
Production Place | Nottingham
Transcript: English (auto-generated)
00:10
Hi everyone, I'm Marco. And, well, I'm going to talk about a different
00:23
way to talk about smart cities because many cities are publishing lots of data. I'm part of Open Knowledge Foundation Italy and I'm working with many Italian municipalities to work on open data. And we all love open data. It's great. It's a great way to get
00:47
transparency from the community, transparency from almost anything. And open data is really
01:02
taking off at the moment: lots of data sources, an incredibly fast-growing movement, and more and more cities, regions and entities all over the world are giving out data. So we get lots of geographic data sets, lots of geologic data sets, and public transportation
01:21
is now ever more interesting. Is it jumping somewhere? Earlier that was happening because of the connection. Yeah, yeah.
01:40
The whole open data thing is so cool, even in Italian: oh, how true open data is. Grazie. But there's a catch. Specifically, this is a photo of Central Park, and there's a
02:07
catch, and I want to show you what the catch is by looking at parks. Because looking at the data catalogs of three cities, New York, Chicago and Bologna, where I come from, we see that there is something really strange going on. New York City parks have
02:27
this level of detail: organization, status, type and so on. Chicago, yeah, we can't even read it, that level of detail. Bologna. I mean, it's part of the game: everyone publishes the information he or she
02:47
has. So, New York obviously has a way to look at parks in a more management kind of way, because it has information about jurisdiction, waterfront, whether it's mapped or
03:03
not, the borough, the precinct and so on. In Chicago, we can't read it, but there's a whole level of detail on the specific services available, or areas, a lot of detail.
03:21
So, if we want to try to connect the dots and see what kind of information matches those views of the park concept, we see that, for example, the precinct or the sign name is connected to the park name in Chicago and to the name column in Bologna.
03:45
And the ID, the specific ID of the single row: once it's gis.prop.num, here it's another code, and here it's code_ug. It's terrible.
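To make the mismatch concrete, here is a minimal sketch of renaming each city's columns onto one shared schema. Only `gis.prop.num`, `signname` and `code_ug` come from the talk; the Chicago and Bologna name columns and all sample values are hypothetical stand-ins.

```python
# Sketch: per-city column maps onto a tiny common schema.
# "gis.prop.num", "signname" and "code_ug" are from the talk;
# "park_no", "park", "nome" and the sample values are stand-ins.
COLUMN_MAPS = {
    "new_york": {"id": "gis.prop.num", "name": "signname"},
    "chicago": {"id": "park_no", "name": "park"},
    "bologna": {"id": "code_ug", "name": "nome"},
}

def normalise(city, row):
    """Rename a raw CSV row's columns to the common schema."""
    mapping = COLUMN_MAPS[city]
    return {common: row[source] for common, source in mapping.items()}

nyc_row = {"gis.prop.num": "B057", "signname": "Central Park"}
print(normalise("new_york", nyc_row))
```

Once every source is normalised this way, the application code the talk describes ("write our code around those column names") only ever sees `id` and `name`.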
04:04
So, it's all about semantics. If we look at a data set and we don't understand the columns, we need to develop something around it to understand how to manage it. We know how we could do that. Having an application for Chicago, we would take
04:26
the data sets, work on that, know the column names and write our code around those column names. It's easy. It's elaborate, but it's easy. But let's say we want to try to take a data set from New York. We would have to add a normalization process, for example,
04:46
because the address, the written address, is not exactly in the same format. So we would have to really create a complete re-elaboration of the data. And it's, again, pretty easy,
05:01
but quite elaborate. And doing it once means you have to do it for every data set you want to add to your system. Or we could start looking at the whole problem at a higher level. It's all about dimensions. We have time, we have
05:22
space, and we have the topic of the data. Time is easy: we know times align more or less, so it's pretty easy to manage. Space, we love space, because otherwise we wouldn't be here.
05:40
What the real problem is, is the third part: the topic problem, because there are so many topics covered by open data. And there are so many data sets available all around the world that it's quite impossible to understand exactly what
06:00
a specific data set covers. Again, if we talk about parks, everyone has a different view on what parks are. And that goes from parks to recycle bins to, you know, any kind of element. Probably nobody in this room would agree on what a door is, so it's
06:27
all about ontologies. We have lots of ontologies explaining almost any kind of topic. Not every one, but many. We have the DMTF for specific computer-infrastructure ontologies.
06:44
We have INSPIRE, we all love INSPIRE, more or less. We have Dublin Core, we have Friend of a Friend. Do we need more? We always need more ontologies, because we always need more ways to describe the world in a coherent fashion. What we see behind the whole text
07:05
is the linked open data graph, the linked data graph specifically, which has lots of data providers and ontology definitions that are interconnected. And
07:23
as such, the whole discussion of ontologies basically means it's like having foreign keys in our relational databases. So if we can get ontologies into the whole discussion on semantics, we can basically be ready to do something way more interesting with our
07:46
data than we could before. But in the end, this is all a discussion for developers and coders. What matters in the end is visualizing the data, and an end user doesn't like a table. It's terrible, because this is just data; this is not information.
08:07
What a user wants is a map, and has always wanted a map, because a map gives you the context for the information. These are all maps of Nottingham in various time periods. Giving you the context for the information
08:28
enables you to understand the situation, to understand where you are and what the services are around you. And it enables you to do one more thing: to elaborate on that. There
08:41
are many ways to elaborate on information. There's MDX for business intelligence, there is SPARQL for the graph world, there is SQL for the relational world, and there is WFS to get the specific features. And this is great, because basically what
09:02
you can get is infographics. You get the possibility to do aggregations and get directly into something like a city dashboard, where you can get more information
09:20
than you could ever get from just having one element in a table. And you can think about planning: only knowing every part of the city enables you to do that. So here comes the whole VivaCity project. The VivaCity project starts by taking this information,
09:42
these very simple rows of CSV files, and takes the concept of an ontology. This is a very simple ontology for a park that I wanted just to show you the functionality of VivaCity.
10:02
It says, yeah, it's unreadable, okay. It says park, tree, species, because in Bologna we have specific details on the single trees present in every park. And
10:24
then there is the management of the park and a phone number, because we have this information available. The problem with these CSV files is that there is no connection between the parts. So what happens? This happens. VivaCity uses a graph
10:49
database as a backend, so the information is connected in a way that enables the user to reconnect it, using only the ontology as an entry point. For example, knowing
11:06
this structure, we could just ask the database what the telephone number of each of the parks is and show it on the map, because we know the geometry. From there we get to here,
11:22
we get to the borough. The borough is this, and we have the telephone number, which is connected here. This way, we just ask a question about the ontology and we get the answer for the specific city. This obviously generates a few problems because, yeah, it's
11:43
not fun and games. The raw data, the ETL, is complex, and we thought that having just an ETL taking only CSV files and Excel files and tables, or shape
12:04
files, basically structured data with meta-information, is one part, but it needed something more, because many cities are starting to publish APIs to give access to their information. So the raw data can be taken directly from APIs, with this meta-information and description.
12:27
The raw data collected is kept and versioned, so that we can see what the situation was at any given moment in time. And then there is semantics.
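The graph lookup described a moment ago, walking from the ontology entry point to answer "what is the telephone number of each park?", can be sketched with a plain dictionary standing in for the Neo4j backend. All node names, edge labels and the phone number are invented for illustration.

```python
# Toy graph: each node maps edge labels to target nodes or literal values,
# mimicking the park -> borough -> phone traversal from the talk.
# Every identifier and the number itself are made-up placeholders.
GRAPH = {
    "park:giardini_margherita": {"in_borough": "borough:santo_stefano"},
    "borough:santo_stefano": {"phone": "+39 051 0000000"},
}

def phone_of_park(graph, park):
    """Follow the ontology path: park -[in_borough]-> borough -[phone]-> value."""
    borough = graph[park]["in_borough"]
    return graph[borough]["phone"]

print(phone_of_park(GRAPH, "park:giardini_margherita"))
```

The point of the design is that the question is phrased against the ontology (park, borough, phone), not against any one city's column names.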
12:41
The semantic part is basically an interpretation of each column, based on the ontologies that are given. We control the meta-information about each specific dataset, and every change in a dataset creates a new semantic model,
13:05
and the user has to intervene and change the mappings. But basically, it requires user intervention only in this case. It's not just
13:22
a front-end, it's not just an informing tool. It's a way to understand, and to really help the understanding of, the data. In the end, it's a way for the city to become not just a producer of data, not just someone who has the information and gives it away,
13:45
but becomes an integral part of the city's decision making. And most importantly, the integration of datasets and APIs enables the city to really understand what's going
14:02
on. The stack: VivaCity 1.0 was supposed to be presented last year in Beijing. Didn't make it. I mean, Beijing didn't make it; VivaCity did. But yeah, okay. Anyway, it was based on OpenLayers
14:21
2, Django and Postgres/PostGIS. It was just a prototype, very slow, incredibly slow, because putting a graph inside PostGIS... don't do that. I mean, the new version is now quite good, but yeah, last year wasn't that great. Now, with VivaCity 2, we changed to
14:43
Leaflet, and maybe soon to OpenLayers 3, hopefully. Again, Django as the data manager: it exposes the APIs and, as I said, MDX, SPARQL, SQL and WFS. These are all supported
15:04
by the backend, so Django interprets everything and transforms the queries into the specific queries for the various backends. Now there are two backends: MongoDB for the document approach, let's say, from the bottom up, and Neo4j for
15:22
the relational part, Neo4j Spatial, to get the relationships between the resources on the map. And yes, it's open source. It will be released soon, in November, by the end of November.
15:42
At the moment, it's just... I wanted to show you a demo, but the server farm where it's hosted in Germany just said: your motherboard has exploded. Okay. Okay, no problem. That's why we use server farms, right? It's somewhere else; someone else has to deal with it. Sadly, I can't
16:05
give the demo. But version one is on GitHub, and it's a prototype; it doesn't work at 100%, as a prototype. Yeah, and that's it.
16:29
Hello, I'm Tamara Colby. I would like to see if you could go back to the slide that had all the data that looked like newspaper graphics or something. Yeah, that
16:48
one. Okay, so that one's sort of an example of how you can be a data integrator. And could you just talk a little bit about what those tables represent, and also how
17:02
these are used in decision making for citizens and urban matters? I mean, good question. This is just an example of what can be done. Basically, these are all aggregations of the information available. The more information is put into
17:26
the system, the better the semantics represents the whole system. And as such, you are able to define specific aggregations. I don't
17:40
know... has anyone used MDX and business intelligence tools? Okay. Basically, what you can do is: imagine a cube of information. Only it is not just three dimensions; it is all the dimensions you need, or think you need.
18:02
As soon as you start working with lots of dimensions, you have to really understand how to get to the element you really want to look at. And MDX does just this. It's like SQL, a normal database query, only it enables you to slice this cube and take
18:23
only the parts that you really want to work on, and then aggregate at the end with only a selection of elements. For example, you could say one dimension is... let's say, it's unreadable here and it's unreadable here. Great. Let's make an example.
18:49
Usage of buses. You know where the bus stations are; you know how the bus lines work. If you get to the level where you
19:03
know, for a given user, when that user uses a bus, you can start aggregating at that level of detail, for example, saying how many users use a specific bus stop. And as soon as you do that, you are able to create a model
19:26
for the bus stops. And this model then enables you to understand how many bus stops you really need. And this can all be done through these MDX queries. Maybe they are
19:44
complex, maybe sometimes they are slow, not always that fast. But part of the whole idea is that MDX usually uses specific databases to work. And applying the
20:03
MDX model to a graph database is a completely new approach; there is only very little literature on that, because it's a very new line of experiments.
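The slice-and-aggregate idea behind these MDX queries can be sketched over a tiny fact table in plain Python; the bus-stop figures below are invented for illustration.

```python
# Fact table: one row per (stop, day) with a ridership measure.
# All stops, days and counts are made-up example data.
FACTS = [
    {"stop": "A", "day": "mon", "riders": 120},
    {"stop": "A", "day": "tue", "riders": 90},
    {"stop": "B", "day": "mon", "riders": 40},
    {"stop": "B", "day": "tue", "riders": 35},
]

def slice_and_aggregate(facts, keep, measure):
    """Collapse the cube onto one dimension, summing the measure over the rest."""
    totals = {}
    for row in facts:
        totals[row[keep]] = totals.get(row[keep], 0) + row[measure]
    return totals

print(slice_and_aggregate(FACTS, "stop", "riders"))  # riders per stop
print(slice_and_aggregate(FACTS, "day", "riders"))   # riders per day
```

A real MDX engine does the same thing declaratively over many dimensions at once; the sketch only shows the slice-then-sum shape of the operation.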
20:22
Actually, we are facing a similar challenge. The problem is that we ruled out the use of ontologies, because the risk was ending up with one ontology for every data set: basically, in the human sciences, whatever it is, it's so vague a term
20:44
that there is no defined set of ontologies. So you end up with one ontology for every data set, for every organization you get the data from. Is it something, I mean, have you experienced this? I understand it's just version one, but how many data sets have you already
21:05
experienced this with? Thank you. Thank you. The problem, sorry, the problem with ontologies is that there is no real
21:23
consensus that a given ontology is good enough for solving a given problem and that everyone should use that ontology. What we did was use the most-used ontologies and try to work just with them. As soon as
21:44
someone gave us data that didn't respect that ontology, there was two-sided work. On one side, we evaluated whether it made sense to elaborate the specific data set
22:04
and try to bring it towards that ontology. In other cases, it didn't make any sense, so it was basically an extension. The whole thing is that the system itself contains, in part, extensions to the basic
22:24
standard ontologies: there is a small extension created by us just to map the additional information. And sometimes some of the information is just mapped between the ontologies.
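The extension approach described here can be sketched as a small term map: reuse a standard ontology term where one fits, otherwise mint one in a custom extension namespace. The Dublin Core and FOAF URIs are real; the extension namespace and the column names are hypothetical.

```python
# Map local column meanings onto standard ontology terms where possible,
# falling back to a custom extension namespace (hypothetical) otherwise.
STANDARD_TERMS = {
    "name": "http://purl.org/dc/terms/title",       # Dublin Core
    "manager": "http://xmlns.com/foaf/0.1/Person",  # Friend of a Friend
}
EXTENSION_NS = "http://example.org/vivacity-ext#"   # made-up namespace

def term_for(column):
    """Return the standard term for a column, or mint one in the extension."""
    return STANDARD_TERMS.get(column, EXTENSION_NS + column)

print(term_for("name"))          # reuses the Dublin Core term
print(term_for("tree_species"))  # falls into the extension namespace
```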
22:47
We have, at the moment... it's a good question. We have around 30...
23:01
well, we have the data sets of Bologna installed, which are around 50 or 60 data sets: the municipality and a few entities around the municipality, transport companies, all entities
23:25
starting to push open data. In the Bologna area, we are starting to engage with other data sets and, most importantly, with the CKAN and Socrata data collectors,
23:41
which have a really nice API to get directly to the metadata, and we could possibly start importing their data sets soon, meaning New York, Baltimore, Ann Arbor and elsewhere. That would be a nice and interesting experiment, because then we would really see what kinds of problems, I mean,
24:06
we think we solved, yeah, yeah, exactly. We found that with the data we were working on, we had some problems but we were able to solve them pretty fast. Yeah. Can I just grab that? Thank you.
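The CKAN API mentioned above exposes catalogue metadata as JSON through its action endpoints. Here is a sketch of building a `package_show` request URL and reading the standard `success`/`result` envelope; the portal URL, dataset id and the abbreviated payload are hypothetical, while the `/api/3/action/...` pattern and the envelope follow CKAN's documented shape.

```python
import json
from urllib.parse import urlencode

def package_show_url(portal, dataset_id):
    """Build a CKAN v3 action-API URL (no network call is made here)."""
    return f"{portal}/api/3/action/package_show?" + urlencode({"id": dataset_id})

print(package_show_url("https://demo.ckan.org", "parks"))

# Abbreviated, hypothetical response in CKAN's standard envelope.
raw = '{"success": true, "result": {"name": "parks", "num_resources": 2}}'
reply = json.loads(raw)
if reply["success"]:
    print(reply["result"]["name"], "has", reply["result"]["num_resources"], "resources")
```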
24:21
Sorry, just before the next question: we've got about five minutes left before the next presentation. There was due to be a presentation after this one, but I've been told it's cancelled, so we're going to shuffle the program a bit: after this we'll have Niko, then yours, Chris. I was just giving everyone a bit of a heads-up about that.
24:41
But we can carry on with the questions in the meantime. So, sorry, who had questions? So, I guess my question is: you have Bologna as a starting point, you have some ontologies specifically for Bologna. When you add New York, there are going to be additional data elements,
25:04
some data elements are going to have to be transformed; you may have some data per capita and have to transform it to a population, something like that. So that's part of the work you do every time you add a new data set: there's going to be some semantic mapping you have to do, and you're perhaps going to have to extend the data model.
25:22
I mean, this is structured data; we're not talking about unstructured data. So there's no magic bullet here; it's work you have to do, right? But the idea is that, over time, you're going to come up with an ontology that will be able to include Bologna, Beijing, New York, what have you.
25:40
Yeah, exactly. Thank you. Yes, exactly, there is one additional aspect: the transformation part, from the data's specific format to the ontology-like format;
26:03
the idea is to have that easily created by anyone, meaning a small flow editor that enables you to just do the basic operations. In fact, that part is still under heavy development,
26:25
because we're even evaluating the possibility of working with Google Refine, now OpenRefine, which is a great, great tool for elaborating the transformation of information and data sets.
26:43
Having that would help us a lot, because Google Refine enables you to export the transformations that you make. Who knows Google Refine? Okay, it's an amazing tool. Now it's OpenRefine; it enables you to basically elaborate CSV files and Excel files and anything,
27:10
and clean up your data. For example, suppose you have data collected by someone over an enormous amount of time,
27:21
and a given road has one name, but it's spelled wrong many, many times. OpenRefine just takes that column and says: hey, maybe you meant the same road. And you can clean up everything just before you put it
27:43
into a more complex system, to enable evaluations. So we're thinking about integrating that into the system, into the platform, so that it's really an easy experience to clean up the data, prepare the transformation, prepare the mapping, and then have everything already running.
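The clean-up step described here, spotting that several spellings are probably the same road, is roughly what OpenRefine's fingerprint clustering does: normalise each name to a key, then group names that share the key. A minimal sketch of that idea:

```python
def fingerprint(name):
    """Normalise a name: lowercase, strip punctuation, dedupe and sort tokens."""
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " " for ch in name.lower())
    return " ".join(sorted(set(cleaned.split())))

def cluster(names):
    """Group raw spellings that share a fingerprint."""
    groups = {}
    for name in names:
        groups.setdefault(fingerprint(name), []).append(name)
    return groups

# Three spellings of the same (example) street plus one distinct street.
roads = ["Via Indipendenza", "via indipendenza", "VIA  INDIPENDENZA,", "Via Rizzoli"]
for key, variants in cluster(roads).items():
    print(key, "->", variants)
```

OpenRefine then lets a human confirm each cluster and pick the canonical spelling, which is the "maybe you meant the same road" step in the talk.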
28:03
Don't you have to do the same thing with spatial data? I mean, using a tool like FME, basically a set of rules to transform data into something else. Yes, we do. We do, and that's part of the game. We have been looking at OpenRefine
28:22
because there is already a tool, an extension for OpenRefine, that enables you to do at least part of that. But yeah, that is one of the issues. And I didn't even talk about the problem that Bologna has: living in Italy, Bologna and the city beside Bologna
28:40
have different projections in their data sets, so yeah. It's great. I mean... no, it's a sad story. That's okay. Well, you're welcome to tell it.
29:02
We're coming up to when the next session is due to start. Now, we don't have one in here. If anyone wants to go to a different talk, there are a few different ones on. Alternatively, Marco, do you want to take some more questions, or do you want a break? Marco? No problem. You're okay? Do people want to ask a few more questions, or hear the story?
29:20
It's really up to yourselves. Okay? Come back afterwards, then, if you want some water. Yeah, okay. No, basically, the problem with the projections in Italy is that, up until last year, when the European standard was set with ED50,
29:47
almost every region had a specific projection, and sometimes even a modified version of a classic projection, which made everything worse for people who were working on the data,
30:02
because there are some regions that overlap into other... not time zones, meridians. So, basically, the government chose to shift part of Italy
30:22
down to Africa, measurement-wise, so that everything would be on the same side. You mentioned OGC standards on one of the slides. So, for spatial representation, does that mean that you're using GML as your standard way of representing spatial data?
30:47
Yes, the standard way, yes, because that's more or less what WFS supports. So, the next question is: you're talking about a whole city, so you know there's something called CityGML, which is designed...
31:03
it doesn't do very well inside buildings, but outside buildings, and it's been extended to include utilities and so on. Is that sort of the longer-term model that you'd like to fit into? Yeah, yeah, that's... yeah. We have a discussion going on
31:26
with people already working on CityGML, and, yeah, it's a little long-term, mid- to long-term, but yeah, that's part of the plan, yeah.
31:40
Do you want the microphone? Well, it's part of the building blocks. You can do it open source and so on, then load the data, then exploit the mapping
32:14
and do the transformation for the data set, so every time you have data,
32:23
you can transform it the same way. Yeah, the idea is exactly that. I mean, you do the mapping; as I said, once the semantics is given for a given data set, that is kept, and you don't even have to tell the system it has to go and get it.
32:41
The idea is to have it happen automatically: say we have a data set that gets updated every week; once a week the data set gets into the system and it's up and working, possibly, hopefully. Do you think that it would also be possible to integrate some management models?
33:18
At the moment, it's just a container with APIs,
33:26
but the idea is to be able to develop plug-ins on top of the APIs already in the system, to be easier to access and faster, possibly, possibly. Yeah, the idea, again: this is planned, not long-term but mid-term,
33:43
tending even to short-term, but yeah, it's part of the game. Do you have another use-case scenario, using something like water?
34:06
For the aggregation? For example, at this moment in Italy, I mean, it's part of the Europe 2020 thing to have broadband in cities.
34:31
So, we're developing a model to see how valuable a city is for a telecommunications company.
34:44
And to do that, we need to know lots of information: basically, the number of people in a given area, the amount of infrastructure already available under the streets, and the kind of people, meaning the income, the mean income, of a given area.
35:09
These are all factors to be considered in this evaluation. And at the end, we can give a specific value to different zones.
35:22
And through this system... basically, it's a project that's just starting because, yeah, Europe: things are not always as fast as we would love them to be. But we were able to define two or three areas
35:46
that big telecommunication companies could be interested in investing in fiber optics and installing the infrastructures. And two of these areas will be starting in a few months.
36:02
So, yeah, it's a model we are using. The whole aggregation-on-open-data part is something we are using, and it's useful. It's only that sometimes it's difficult to really see the connections in the information.
36:25
And that's in part why we wanted to do this project. Because as soon as you see how information is connected between the, how the dots are connected in the city, then the aggregation is simply deciding
36:42
where to cut to see how the cake is made. And as soon as you see that, the aggregations are immediate, because you see the structure, the stratification of the city, the services, the infrastructure.
37:00
Suddenly you can even see where a city is likely to grow, because you see the infrastructure, the transport, the quality of living in an area, and the services present: kindergartens, schools, and so on.
37:26
You really understand how the urban fabric is created, and as soon as you get that, the aggregations come naturally. And the good thing is that, having a standard tool, a standard language
37:44
for these kinds of aggregations, you can basically use a graphical tool to just play with them, and it creates new tables. The problem then is really understanding those tables and numbers, but it's really interesting because
38:03
it gives you a real playground to work with. It's like SimCity, only with real data. I'd just add that it allows you to compare cities, in the sense that you can compare the quality of the bus system in Bologna
38:22
to the quality of the bus system in Milano. And you can say: it's much better in Bologna, and it's much cheaper in Bologna — what's going on in Milano? Having all the information connected enables you to build metrics. It's an interesting point.
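A cross-city metric like the Bologna-vs-Milano bus comparison could be as simple as service density per resident divided by price. Every figure and field name below is invented purely for illustration:

```python
# Hypothetical cross-city bus metric: stops per 1,000 residents divided by
# the single-ticket price, so higher = denser network and cheaper fares.
# All numbers below are invented for illustration only.

def bus_score(stops, population, ticket_price_eur):
    density = stops / (population / 1000)  # stops per 1,000 residents
    return density / ticket_price_eur

cities = {
    "Bologna": bus_score(stops=1200, population=390000, ticket_price_eur=1.3),
    "Milano":  bus_score(stops=3100, population=1350000, ticket_price_eur=2.0),
}
best = max(cities, key=cities.get)
```

The point is not this particular formula but that, once the underlying data is connected and comparable, any such metric becomes a one-liner over the aggregated tables.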
38:41
Your example is around parks. I work at Birmingham City Council in the UK, and I seem to have spent days just discussing with our parks team what counts as a park, a recreation ground, or an open space — and that's within a single municipality. So in order to do that kind of international comparison,
39:01
these ontologies and descriptions can be really, really important in allowing that kind of comparison, I think. Infrastructure, I suspect, is a little bit easier — I don't know, because I don't work in that sector — but use of space, that's a really difficult thing to compare.
39:21
Not that it shouldn't be tried, though. Your parks example there was mocked up, but comparing even Chicago to New York — judging not just by the attributes available, like saying which parks have mini golf — could make an interesting worldwide comparison: the best mini golf courses.
39:40
You know, that alone gets into the fine detail. I think that's one of the factors we wanted to explore, because it had a very practical reason,
40:00
the one I was talking about: the fact that we wanted to evaluate the value of a city for telecommunications. But in fact it's just a metric. You look at the value of something, you look at demographics, you do something like market research or a whole composite assessment,
40:23
and you say: well, here's what we see on the map, here's where we see an underserved area. So — can you understand the drivers of your model? The answer is quite strange, so I'm putting my hands up first.
40:43
I believe in Dr. House: he says the patient always lies. The patient lies, but data doesn't. Data is numbers, and numbers don't lie — at least, hopefully.
41:03
You can manipulate them, but then we're talking about black magic. Data doesn't lie; it's numbers. And being able to build a model on numbers alone, without having to phone people and ask,
41:24
"would you like to have broadband?" and so on, has given us the opportunity to really rethink and change that approach — because, again, people lie.
41:41
And on the phone, they don't even have to try. Once everybody agrees on the basic data, then you can have a rational conversation about what to do. One thing you may be aware of is something called the Urban Observatory. The first thing I ever saw from the Urban Observatory,
42:01
which is totally graphical from what I saw, was a comparison of parks in Paris with parks in New York or Chicago or London, and it was just incredible how much parkland there is in Paris compared to how little there is in Chicago. And it's graphical. All they've really done is make sure that the area covered is the same,
42:25
so you're comparing apples to apples and not apples to oranges. But it's exactly the kind of thing you're doing, except they do it totally graphically. The idea is to be able to do that; the whole platform is scoped to do it numerically too, if you need it.
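The apples-to-apples trick described here — fixing the area covered before comparing — can also be done numerically: compare parkland as a fraction of a same-sized window clipped over each city. The window size and parkland figures below are invented placeholders:

```python
# Sketch of the equal-area comparison described: parkland as a fraction of a
# fixed-size window, so cities of different total sizes stay comparable.
# Window size and parkland figures are invented placeholders.

def parkland_fraction(park_areas_km2, window_km2=100.0):
    """Fraction of a fixed-size window covered by parks.
    park_areas_km2: areas of park polygons already clipped to the window."""
    return sum(park_areas_km2) / window_km2

# Invented figures for two equal 100 km^2 city-centre windows:
paris = parkland_fraction([6.0, 8.5, 3.5])    # -> 0.18
chicago = parkland_fraction([2.0, 4.0, 3.0])  # -> 0.09
```

Because both fractions are taken over an identical window, the two numbers are directly comparable, which is exactly what the graphical side-by-side achieves visually.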
42:45
So, for example, having an endpoint that Excel can use to make MDX queries — and possibly graphically too, because you could define a query beside the map, meaning in a side panel you'd write your MDX query, run it,
43:03
and the data shows up on the map. Maybe not quite like that, because that requires normalization of areas, which is probably easier to do numerically. But the idea is exactly this: to be able to see what happens.
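For the Excel/MDX endpoint mentioned above, the queries themselves would be standard MDX against an OLAP cube. As a rough illustration, here is a small Python helper that builds a minimal MDX SELECT; the cube, measure, and dimension names are hypothetical, not the platform's actual schema:

```python
# Minimal MDX query builder: one measure on columns, all members of one
# dimension on rows. Cube/measure/dimension names below are hypothetical.

def build_mdx(measure, dimension, cube):
    """Build a minimal MDX SELECT statement."""
    return (
        f"SELECT {{[Measures].[{measure}]}} ON COLUMNS, "
        f"{{[{dimension}].Members}} ON ROWS "
        f"FROM [{cube}]"
    )

query = build_mdx("Population", "Zone", "CityCube")
# One might POST a query like this to the platform's endpoint and plot the
# returned rows on the map's side panel, as described in the talk.
```

This is only the query-construction half; executing it would go through whatever XMLA/HTTP endpoint the platform exposes, which is not specified here.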
43:24
I mean, this slide deck is slightly different from the one I had before, because the deck for version one was deeply SimCity-based — because what is open data, if not the engine of SimCity?
43:44
I've been working with the municipality where I live. It's a very small town, 15,000 people. And the city council
44:07
was really amazed when they saw me playing SimCity on my laptop while waiting for a meeting. They said: what's that? It's really cool.
44:21
That's the moment I started thinking: hey, SimCity — why did you never play it? You're managing a city. If you can't manage a simulated city, how can you manage a real one? There are more problems in reality, for sure, more complexity, but the underlying model is basically the same,
44:45
and the new version of SimCity is even more like that: it's even more a network, specifically a network of networks. And that's what cities are, and that's why the whole thing started.
45:03
Any questions or comments? I'll just add one thing: once you've got the data, using a SimCity-style gameplay model is a really good approach, because it's something that politicians and other non-technical people can use. You don't have to be an engineer to do that kind of thing.
45:22
That's something we did with some members of the city council. They played, they had fun, and they saw that they couldn't always manage even the simulated city — but it was great.
45:43
It's interesting, actually, from the SimCity point of view: the talk we're supposed to be in now, by Rob Hawkes. I don't know if you've come across ViziCities — V-I-Z-I, all one word. I know Rob Hawkes is a former developer at Mozilla,
46:04
and they're taking the likes of OpenStreetMap data — purely for London at the moment, I think — and OS Open Data. They've made 3D visualizations that work in the browser, and from what I understand they were inspired by SimCity,
46:21
effectively creating a SimCity game with London as the city. So it's well worth going away and checking what they're doing. I think it's still in beta, isn't it? Yes, it's still in beta. I'm following some of their Twitter accounts at the moment; Rob Hawkes just posted some images — they're currently trying to deal with the z-height data.
46:41
In London you've got roundabouts that are underground, and things like that. So they're trying to deal with underground roundabouts, combined with flyovers, combined with various parts of the infrastructure — let alone the Thames running through it. On the website there is an amazing video of the Underground map,
47:01
and it's amazing: a 3D visualization of the whole Underground network, with real-time data on where the trains are at any given moment. It's really great.
47:23
The thing I find interesting from the city council point of view is that in Birmingham we broadcast the planning committee on the internet. So you've got these untrained, elected members
47:40
who are looking at 2D plans, perhaps on a Google map or our own internal maps, and asking the kinds of questions that a 3D visualization — playing with the city — would often answer: how does the light affect things, stuff you would normally model. But they actually just want something they can almost instantly turn and play with,
48:01
so they can make a more informed choice about the actual planning decisions — particularly once you start throwing demographic data and that sort of thing on top of the physical infrastructure and the physical effect of something going in. I was hoping to meet the guys from ViziCities, but yeah. The guys from ViziCities are great.
48:22
They're superstar web developers. Amazing. Any more comments or questions, or should we give Marco a break? We've got ten minutes before the next session starts. Well, I just want to say that one of the biggest problems with urban planning — in, like you say,
48:42
land use and planning committee decision making — is that it's a long, elaborate process, because you have to integrate all this information: the infrastructure, then the economics, then the architecture. And then you take every public comment you're supposed to get,
49:01
and then you respond to every public comment you get, and after seven months maybe you're just about ready to get your permit to build — let's say a big new apartment building, a nice new thing, with ten percent affordable units.
49:22
It just takes so long. So what you're saying is that this will provide more immediate, integrated data — networks of data showing all the ways something fits into a space — and help people make better decisions,
49:44
and is it a better way to argue the pros and cons of a certain decision? No — it's part of being informed. Being informed is not just having that piece of paper; it's knowing everything around the decision you have to make, and all the implications.
50:04
Thank you.