
The Open Superverse: an open & interoperable geospatially anchored digital ecosystem for all


Formal Metadata

Title
The Open Superverse: an open & interoperable geospatially anchored digital ecosystem for all
Subtitle
Why Open AR Cloud wants the Digital Upgrade of Physical Reality to build on the best of the Open Web
Alternative Title
The open geospatially anchored superverse ecosystem (Open AR Cloud)
Number of Parts
295
License
CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Augmented Reality, when connected persistently to the physical world through 1:1-scale, real-time-updated 3D digital twins, allows us to create a shared programmable space, or superverse, that lets us paint the world with data, lets our digital lives escape from the small glowing rectangles into the real world around us, and lets us experience it together. This technology has been named the AR Cloud. In October 2018 the Open AR Cloud association (OARC) was formed to bring people together to build an open AR Cloud ecosystem that works for everyone, everywhere, on every device and every platform, while respecting the right to privacy, freedom, and safety of all users. On the 12th of February 2019, 12 working groups were formed to solve some of the hardest problems in spatial computing. This talk covers the promise and peril of AR Cloud technology and OARC's ongoing work to try and make this technology reach its potential to do good in the world. We think this is best done through open standards processes, open source, transparency and respect for the digital rights of individuals.
Transcript: English (auto-generated)
Hello, it is an honor to be here at this great conference. My name is Jan-Erik Vinje, I'm the managing director of Open AR Cloud.
That's a global nonprofit organization with a mission to drive open and interoperable real-world spatial computing technology, data and standards for the benefit of all. I'm also a full-stack developer at Norkart.
That's a geospatial software and services provider in Norway. Now I will talk about something I think will be a major milestone for humanity. I'm starting off with a journey back in time.
13.7 billion years ago, space, time, matter and energy came into existence. The smallest dots on this rendering are entire galaxies containing billions of stars. Our solar system sits on the outskirts of one of those galaxies.
Planet Earth itself is like a tiny blue dot in our solar system. It's the home of every person alive and who has ever lived. As far as our best scientists know, there is no evidence for life anywhere else in the universe.
What each and every one of us does now, at this critical point in time, might have cosmic significance. It's been 4 billion years since life began on our planet,
but something perhaps equally spectacular is about to happen in our lifetime. Space, time, matter and energy, and life itself, are about to merge with the digital world.
We will be fusing bits and atoms together, uniting the physical with the digital in a shared programmable space. Like the emergence of life in our solar system, this merger could perhaps be unprecedented in the history of the cosmos.
We haven't figured out what to call this digitally upgraded universe yet. Some of the suggested names are the AR cloud, the metaverse, the mirror world or the spatial web. There are many suggestions, only time will tell. I like to call it the superverse and for convenience and good fun,
I will use that name during my talk. Currently, people and organizations all over the world are developing real world spatial computing technology that connects the physical and digital world.
But be careful, we are at an existential crossroads. This is an opportunity to make our lives here on our planet much better or much worse. More on that later. First, let's look at what I mean by the superverse. In the superverse, digital objects will live alongside atoms in a shared programmable space
attached firmly to our planet. We will experience this world together like multiplayer and we will do that with all our senses and I mean all our senses. Technologies enabling us to see, hear and touch digital objects are commonplace in the XR industry.
But even smelling and tasting the digital worlds is possible should we want that. There are actual technologies for that.
Digital objects in the superverse can behave and be manipulated by us as if they are real physical things, even though they have no mass or substance. But since they are digital, they need not be constrained by the laws of physics. The possibilities are limited only by our imaginations.
I don't know if you have seen this demo of HoloLens 2, but this woman is actually playing a melody on that virtual piano based on the sensors that are able to track each of her fingers.
Another point is we will also co-inhabit this machine-readable superverse
with our AIs, our robots, our drones, autonomous vehicles and IoT devices. So it's way more than just AR. I even propose that we in the future bring our pets into the superverse.
One day we might be able to play catch with our dogs using virtual objects that have virtual smells and that might try to run away from the dog as the dog approaches. This might sound like the science fiction we have been fed for decades, but the reason that this technology is about to become a reality now
is because of a number of recent advances in a wide range of technologies that need to work together. So the most fundamental technologies are computer vision, machine learning, compute power, capacity to store and interpret vast amounts of data,
miniaturization of sensors and optical devices, and low-latency, high-speed networking. Google is already providing an experimental glimpse of the superverse with the new version of Google Maps, where Local Guides or Pixel phone owners can go into AR mode and navigate.
And they achieve this using their Visual Positioning Service, which offers position and orientation that is far more accurate than GPS and compass. So it's a game changer in spatial computing.
Niantic, the company behind Pokémon Go, you have probably heard of them. They are demonstrating prototypes of similar technologies using 5G and edge computing for multi-user AR gaming experiences in the real world. But how will all this new content and these apps and services work together?
Let's look at how an index of thematic layers might be the key organizing principle for the superverse. This diagram is from our State of the AR Cloud report, a roughly 150-page report created in collaboration by almost 40 people
who volunteered in Open AR Cloud, across our different working groups. We produced this diagram to convey the structure we envision for an AR cloud ecosystem. At the bottom of the diagram we show a new type of map
that will be the basis of the whole structure. We call this map the reality capture layers. In traditional mapping we might have called them base layers but these are not going to be manually digitized 2D maps but rather rich real-time 3D representations of the world
that are automatically generated based on reality capture technologies and using machine learning algorithms. Here is a closer look at the reality capture layers. We think they can be separated into two main layers and both are directly related to the physical world.
The first is the static reality layer and it will contain parts of reality that change very slowly like the terrain, buildings and roads. It's likely to be this layer that provides us with centimeter accurate geospatial position and orientation of AR devices, drones, autonomous vehicles and robots
by matching what they observe through their cameras and other sensors with the data in this layer. The second is the real-time reality layer, which contains the dynamic elements such as people, animals, vehicles, robots, drones and movable objects. These layers will be at the core of fusing together the digital and the physical worlds.
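The static/real-time split just described can be sketched as a minimal data model. This is a hypothetical illustration only: the class names and the TTL-based expiry are assumptions, not an Open AR Cloud API.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Feature:
    """Anything anchored to the planet: terrain, a building, a person, a car."""
    feature_id: str
    geometry: dict                       # e.g. a GeoJSON-like geometry payload
    captured_at: float = field(default_factory=time.time)

class StaticRealityLayer:
    """Slowly changing reality: terrain, buildings, roads.
    Serves as the matching target for centimeter-accurate localization."""
    def __init__(self):
        self._features = {}
    def update(self, feature: Feature):
        # Keep only the latest capture of each feature.
        self._features[feature.feature_id] = feature

class RealTimeRealityLayer:
    """Dynamic elements: people, animals, vehicles, robots, drones.
    Entries expire quickly, because the world keeps moving."""
    def __init__(self, ttl_seconds: float = 2.0):
        self._features = {}
        self._ttl = ttl_seconds
    def update(self, feature: Feature):
        self._features[feature.feature_id] = feature
    def live_features(self):
        now = time.time()
        return [f for f in self._features.values()
                if now - f.captured_at <= self._ttl]
```

The short TTL on the real-time layer reflects that dynamic captures go stale within seconds, while the static layer simply keeps the latest version of each feature.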
To make sure that spatial computing becomes open and interoperable, Open AR Cloud thinks there needs to be a universal standard for the six-degrees-of-freedom pose, that is, the geographical position and orientation, of real and digital objects.
We have named this concept GeoPose. As far as we know there is currently no such standard; maybe someone in the room knows of one, but Scott Simmons at OGC didn't know of one, at least.
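Since no GeoPose standard existed when this talk was given, any concrete schema is speculative; a minimal sketch of the concept (geodetic position plus quaternion orientation, six degrees of freedom in total) might look like this. Field names are assumptions, not an OGC schema.

```python
from dataclasses import dataclass

@dataclass
class GeoPose:
    """Six-degrees-of-freedom pose anchored to the planet (illustrative only)."""
    latitude: float    # degrees, WGS84
    longitude: float   # degrees, WGS84
    height: float      # meters above the WGS84 ellipsoid
    # Orientation as a unit quaternion (x, y, z, w); identity by default.
    qx: float = 0.0
    qy: float = 0.0
    qz: float = 0.0
    qw: float = 1.0

    def is_valid(self) -> bool:
        """Basic sanity checks: coordinates in range, quaternion normalized."""
        norm_sq = self.qx**2 + self.qy**2 + self.qz**2 + self.qw**2
        return (-90.0 <= self.latitude <= 90.0
                and -180.0 <= self.longitude <= 180.0
                and abs(norm_sq - 1.0) < 1e-9)

# A drone hovering 120 m above a street corner, identity orientation:
pose = GeoPose(latitude=44.4268, longitude=26.1025, height=120.0)
```

A real standard would also have to pin down the reference frame and rotation convention, which this sketch leaves implicit.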
By the end of this year we hope to have an OGC GeoPose standards working group. A draft charter for the SWG has been published by OGC, and it's open for public comment until the 5th of September. The draft has the backing of OGC members like Open AR Cloud, obviously,
the British Ordnance Survey, and my day-job company, Norkart. There has certainly been wider interest; there are more people wanting to take part in this. I really hope we will succeed in this endeavor. It is my sincere belief that GeoPose will be as fundamental to spatial computing
as the URL, the Uniform Resource Locator, is for the web. And you can quote me on that. Zooming out again: when reality is captured, spatially indexed and made machine-readable,
we can now start to paint the world with data. We can now put all sorts of digital content and experiences in the real world and have it become an integral part of our surroundings, even adapting to and interacting with real world moving objects.
In this illustration you see a few examples of thematic layers you can use to organize and annotate spatially indexed content, applications and services. Layers that we can summon at will and combine however we like
to access whatever we may want or need. I hope to have time for some cool examples later. Before I continue, I will make the claim that a fundamental prerequisite for the success of the superverse is that at its very core, it's designed as an open platform, just like the web.
It must be accessible to anyone, everywhere, using any type of device on any hardware platform. So let's make the open superverse the biggest part of the superverse.
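The idea of summoning and combining thematic layers, mentioned a moment ago, can be sketched as a tiny registry. This is purely illustrative: the class names, the layer names and the naive bounding-box query are all assumptions, not an Open AR Cloud specification.

```python
class ThematicLayer:
    """One thematic layer of spatially indexed content."""
    def __init__(self, name: str):
        self.name = name
        self._content = []   # (lat, lon, payload) tuples
    def add(self, lat: float, lon: float, payload):
        self._content.append((lat, lon, payload))
    def near(self, lat: float, lon: float, radius_deg: float = 0.01):
        # Naive bounding-box lookup; a real service would use a spatial index.
        return [p for la, lo, p in self._content
                if abs(la - lat) <= radius_deg and abs(lo - lon) <= radius_deg]

class Superverse:
    """An index of thematic layers; users combine any subset at will."""
    def __init__(self):
        self.layers = {}
    def register(self, layer: ThematicLayer):
        self.layers[layer.name] = layer
    def summon(self, names, lat, lon):
        """Query only the layers the user has chosen to activate."""
        return {n: self.layers[n].near(lat, lon)
                for n in names if n in self.layers}

sv = Superverse()
art = ThematicLayer("art"); art.add(44.4268, 26.1025, "sound sculpture")
commerce = ThematicLayer("commerce"); commerce.add(44.4269, 26.1026, "camera shop offer")
sv.register(art); sv.register(commerce)
visible = sv.summon(["art"], 44.4268, 26.1025)   # only the art layer is active
```

The point of the sketch is the combination model: content lives in many independent layers, and the user, not the platform, decides which ones are visible at any moment.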
But it's not enough to make it open. There are some other things we need to consider. Capturing reality at such an unprecedented detail could lead to a large number of new potential threats to our privacy, freedom, safety and security.
If we are mindless or careless, we could end up letting governments or large corporations know every location, body posture, behavior, social interaction, emotional states and health states of every person, everywhere on the planet.
That would make the superverse into the ultimate digital prison. Open AR Cloud will fight fiercely to avoid such a scenario, and we have a lot of ideas on how to. On the other end of the spectrum, we know that accidental leaking of national security secrets
can happen when data is not managed carefully enough. We saw that with the GPS-based Strava exercise-tracking application that revealed US military bases in Afghanistan. With the type of reality capture we expect for the superverse,
we could risk much bigger national security incidents. And also, if we are careless about safety when designing this technology, an attacker might hack into our AR device that augments both our vision and our hearing,
maybe other senses as well. And because the world is now machine-readable, the attacker will know exactly when we are near a cliff or if we are near a high-traffic road where a fast-moving bus is about to come by. So imagine the attacker injecting a virtual lion
that roars and jumps towards us. Our instincts will compel us to jump away, straight to our death. It will look like we had an accident or committed suicide. It would be the perfect murder, and we certainly want to do everything we can
to avoid those kind of scenarios. So technology is a double-edged sword that is becoming sharper and sharper. The potential for good as well as bad is magnified with each new breakthrough.
In a best-case scenario, this technology could help usher in a digital renaissance where humanity could flourish like never before, both culturally and economically. In a worst-case scenario, if the biggest problems remain unsolved, we could start descending towards a digital dark age
no better than some of the scariest episodes of Black Mirror. In Open AR Cloud, we are cautiously optimistic that something close to the best-case scenario is within reach. But this is not going to happen by itself.
We need the contribution of a lot of dedicated people working together. We are actively working with organizations like the XR Safety Initiative to look into solutions for privacy, safety, and security. To prevent walled gardens and to promote interoperability, we have partnered with the Open Geospatial Consortium,
and we are in dialogue with a number of other relevant standards bodies like the Khronos Group and W3C. But this, my friends, is my challenge to you. In my humble opinion,
what is ultimately the most ideal road towards a technical ecosystem that is beneficial for everyone is to build it all from the bottom up with free open-source software, hardware, and data. This is where all of you fit in.
This is where you can make a dent in the universe and help shape the future of spatial computing. There are a gazillion things the open-source geospatial community can contribute to, but let me give you a couple of examples.
Imagine a reference open-source library that can convert to and from GeoPose and a Cartesian XYZ pose. That would be very helpful for AR rendering of geospatial content. Open AR Cloud already has a small number of developers who have volunteered to contribute,
so please join them. Also, Open AR Cloud intends to start work on a reference open-source spatial index service. We are considering support for CityGML and IndoorGML, and, of course, we plan to anchor all the objects in the index
to the planet using GeoPose. Next: considering the state of current AR cloud technologies, they are very immature. We have decades of intensive R&D ahead of us before we can hope to realize their full potential.
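The GeoPose-to-Cartesian conversion library mentioned above did not exist yet at the time of the talk, but its positional half, converting WGS84 geodetic coordinates to Earth-centered Cartesian XYZ (ECEF), is standard geodesy and can be sketched as follows. The function name and API here are assumptions.

```python
import math

# WGS84 ellipsoid constants
WGS84_A = 6378137.0                    # semi-major axis, meters
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared

def geodetic_to_ecef(lat_deg: float, lon_deg: float, h: float):
    """Convert WGS84 latitude/longitude/ellipsoidal height to ECEF XYZ in meters."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    sin_lat, cos_lat = math.sin(lat), math.cos(lat)
    # Prime vertical radius of curvature at this latitude
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * sin_lat**2)
    x = (n + h) * cos_lat * math.cos(lon)
    y = (n + h) * cos_lat * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + h) * sin_lat
    return x, y, z
```

A point on the equator at the prime meridian maps to x = 6378137 m, y = z = 0. The orientation half of a full GeoPose conversion would additionally rotate the quaternion into the local East-North-Up frame at that point; that part is omitted here.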
Inspired by the culture of open source that has helped drive the big bang of machine learning (we see that every other talk here is about machine learning now), we want to have something similar in the spatial computing sector.
We propose that, inspired by the game-engine-based toolchain used for R&D in autonomous vehicles, we build a similar toolchain for AR cloud technologies. So instead of cars, we will simulate AR devices and the performance of AR cloud services in a synthetic world.
We propose that we build this toolchain based on an open-source game engine like the Godot engine. I actually had a personal meeting with the founder of the Godot engine, and he gave a thumbs up for the community to start building this toolchain.
He himself and his core people currently have no time or budget to build it. But hopefully someone from this community, someone in Open AR Cloud, could start this.
So now, if we have time, let's move into some of the examples that I referred to of how these layers could achieve different things. So the construction layer is the layer where we will architect, plan,
and construct our buildings, cities, and physical infrastructure. This industry is a 10 trillion US dollar marketplace. And I believe that within maybe 10 years, the superverse will unleash billions, if not trillions, of dollars in value creation in this layer alone.
The first features are already on the market. Companies like Trimble, Sightlands, and several others have started to provide ways to bring BIM models directly to the construction sites where they belong. Infrastructure under the ground can be visualized to help avoid costly mistakes and delays. You can show the proposed building on-site to stakeholders
before construction starts to make sure you build what is most valuable for everyone. And during construction, you will be able to verify that things are done according to plan by looking at the 3D models as they compare to the physical construction. If things do not go according to plan on the construction site,
the BIM model can be updated in real time in direct communication with construction engineers who may be even off-site. And it's mediated by a real-time streaming of spatial data to give them situational awareness. So when the construction is complete,
the adjusted BIM model can act as a live digital twin that will be very valuable for maintaining and operating the building in years to come. And now the commerce layer. It is my hope that the commerce layer will provide a great boost to local economies and communities, empowering local businesses and customers alike.
So let me provide you with an example inspired by Tim Berners-Lee's Solid project, playing out a few years from now in the superverse. Imagine a specimen of homo photographicus walking around an unfamiliar city taking pictures. Suddenly a small ding sound is heard to his left
and he turns his head, and across the street he sees an indicator over a shop. Amazingly, a used Hasselblad camera from the early 70s is on offer for a bargain price. He's definitely going to check it out. So this matching of seller and potential customer
has been mediated through the superverse, initiated by the personal AI assistant on the user's behalf. What is radically different about this matching, compared to how ads are matched online today, is that the user has full control over his own user profile,
which is stored in a Solid pod where it's kept 100% private and encrypted. So the AI assistant has not revealed the name, age, gender, position or anything else
other than the particular types of cameras the user is interested in. So this is a totally flipped model of advertising, and it is just one of many new business opportunities that the superverse can unlock. And then you have the art layer
and actually this is my favorite one. I could probably go on for days about it and only scratch the surface. So imagine a town commissioning a group of artists and craftsmen to build a collaborative work of art for a particular location in that town. The project could fuse together any existing art forms
like acting, music, sculpture, painting and animation, make it spatially aware, and have it respond to what people do, to the weather and to the light conditions. And you can mix in game technology, AI
and any other type of computing. So there will be new art forms that the world has never seen, that people have never created and never experienced. So I'm going to round up with a little bit about Open AR Cloud.
We have a lot of working groups. We are sort of becoming a place where people from around the world can combine their efforts to solve hard challenges in real world spatial computing. We've got some of the world's leading experts from a wide range of fields.
But we're just getting started and we really need more help. We need your help. So join us and help create a better future for all of humanity. I encourage you to go to openarcloud.org for more information. You can download our State of the AR Cloud report. You can go and watch videos from our events on YouTube.
We had an event in Santa Clara in Silicon Valley which is all recorded and we're gonna have a new event in Munich in October. It's not announced yet. But there's a lot of activity going on.
So with that, I will open up for any questions. Thank you.