Augmented reality with smart glasses


Formal Metadata

Title: Augmented reality with smart glasses
Number of Parts: 46
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract: Smart glass technologies have matured to a point where wearers are able to get a true augmented reality viewing experience. To provide an intuitive user experience, offering different types of user interaction methods with smart glasses is key. This talk explores some of the available options and how you can implement them with Sony SmartEyeglass.
Transcript: English (auto-generated)
All right, thank you. Hello, everyone. It's nice to be here in Berlin. Great weather, good company. My name is Wime Yeo, and I work in the Sony Developer Relations team. Today I'm doing this talk with a colleague of mine, Ahmed. Both of us work at Sony Developer Relations, where we support partner developers in integrating their solutions with Sony products. For the past year we have been working with the Sony SmartEyeglass, and today's talk is really to share that work and to inspire you to do something in this area.

So what is augmented reality? You've seen Iron Man, and this is a screenshot you'll recognize from the movie. The easiest way to explain augmented reality in a presentation is to look at this: you can see the bridge, which is the real-world object, and then all these superimposed digital objects, pieces of information that tell Iron Man where the missile is going and how much distance he has to the bridge. The key points here are that the real objects are the bridge and the tunnel that you can see, and the digital objects are superimposed on top of them.
There is also a formal definition of augmented reality from academia. Professor Ronald Azuma's definition is the most widely accepted, and also a digestible one, I would say. It has three characteristics. One is that AR combines real objects and virtual images into one view, and you see both at the same time. Another is that it's interactive: you can interact with the virtual objects, either clicking on one to see more information about it, or having extra information pop up when something changes. The third is that it is registered in 3D, so the virtual objects look like they are fixed in space. AR supplements reality rather than overriding or replacing it; if you don't see the real world at all, that's virtual reality.
I think most of us have experienced AR today via the mobile phone, a handheld AR experience very similar to what you see here in the pictures. And there are two disadvantages that we think this has. One is that it is not very immersive, because you're seeing and experiencing the world through a mobile phone. You know that feeling quite well when you spend too much time trying to take pictures and end up not really being in the moment of the event. That is one disadvantage we see with the mobile handheld AR experience. The other is that it is not really hands-free, because you need both hands to hold the phone, or one hand to hold it and the other to interact with the screen. So the mobile handheld AR experience isn't there yet in terms of the immersive experience you see in the Iron Man movies, but at least the technology behind that type of experience is becoming commercialized enough to bring us closer to that target. So if the mobile phone doesn't provide a good UX for AR, what then? This is where we think smart glasses are
a much better form factor for delivering this kind of AR UX. There are actually quite a few models of smart glasses out there that you can buy and play around and experiment with. This picture is from one of our partners, APX Labs. They provide custom solutions for enterprise customers who want to use AR technology to improve their business operations, and they are making strides in this area toward a good AR UX, at least for the enterprise use case. So when it comes to AR and smart glasses,
the point is that you get an immersive experience compared to the mobile handheld experience. You can focus on being in the moment and on your task at hand, because the information appears right in front of your eyes; you are wearing the display, and you're not distracted by trying to operate another device. The other point is that with smart glasses, your hands are a little more free, or at least they can be. It depends on how you design the user interaction, but you have the possibility to let the user keep their hands a bit more
free, rather than just holding the device itself. These two screenshots up here are again from our partner APX Labs. In the first one, you can see a technician going about fixing some cylinders or an engine, and he gets the information about the task he needs to do right in front of his eyes instead of getting distracted by checking a mobile phone, a smartwatch, or even a paper notepad. The other one is a hospital setting, where a doctor could be going around looking after patients and would get their vital statistics right in front of their eyes instead of checking paper charts and whatnot. But anyway, I could go on; it's really something you have to try out yourself. Downstairs at the Sony booth we have smart glasses for you to try, and I think you will get the picture much better from there. So when it comes to user interface possibilities,
or user interaction possibilities, with smart glasses to get a good AR experience, we have thought about three different options. One is sound, or voice input: you speak, and the user gets information and feedback in the form of speech. The second is gestures, used to, say, navigate the UI or rotate the virtual objects you see in front of your eyes; this relies on sensor data from either the glasses or the smartphone. The third possibility is an accessory, such as a control unit. So which one should you provide? You have to consider who your target user is and what kind of environment they are operating in, because no single method is better than all the others. For example, if your target user works in a very noisy environment, say an airport, then sound might not be a very good user interface mechanism, but something else, like gestures or an accessory, could provide a much better experience. These are the three possibilities we have thought about, but they are most likely not the only ones. So, for the Sony SmartEyeglass,
what we have done is look at how we could devise new kinds of user interface possibilities. The SmartEyeglass provides a true AR experience because you get the information right in front of your eyes. It's binocular, so you get information on both sides, not just one. The display is really clear, so you can see real-world objects just as clearly as the digital objects, and it's super thin, so people can't really tell that it's a special kind of glasses. It's very light as well, so it's easy to wear without eye strain or strain on your head. The glasses work with Android smartphones.
It supports Android 4.4, and it has several sensors on it that you can use, like an accelerometer, a gyroscope, and a compass. But currently, the SmartEyeglass UI interaction
is via a control box. Here, that circle is what the user actually uses to interact with the UI in the smart glasses. The thing on the TV screen is just to show what the user would see if he were standing in that location; the location in the TV picture is actually somewhere else, and the idea was to show what the user would see through the smart glasses if he were standing there. The control box has a touchpad and hardware buttons. But through talks with our partners, we have found that this is perhaps not a really good or intuitive way of interacting with the user interface, because with the control box they always have to figure out where to press and where to touch. So we did some experiments to try to find alternative ways. These are just demo applications that we experimented with ourselves.
One is to control the 3D objects using the SmartEyeglass sensors. As I said just now, the glasses have sensors like an accelerometer, a compass, and a gyroscope, and one experiment was to use these to manipulate the objects that you see in the display. The other option was to use a smartwatch to provide a user interaction mechanism. And now I will get my colleague, Ahmed, to tell you more about the details of these experiments that we have done. So Ahmed, come on.

Hello. Before I proceed any further, I wanted to say I'm really excited to be here today.
I got the opportunity to meet a lot of skilled augmented reality developers here today, so I actually want to take a photo of this moment as a memory before proceeding. Thank you so much. Let me introduce myself. I was a full-stack developer working as a freelancer before joining Sony. I joined Sony around a year ago, and I was very lucky to be working on the SmartEyeglass, because I'm really interested in devices and the open hardware movement in general. This position gave me the opportunity to play with this really exciting technology before it was even released. I managed to do some experiments, and I would like to share those experiences with you. But before going down that road
and sharing my experience, I would like to point out how the SmartEyeglass works at the system level, in the background. SmartEyeglass applications actually run on the phone. This allows the SmartEyeglass controller box and the glasses themselves to be really lightweight, and it also saves a lot of battery life, because everything leverages the CPU power of the phone: all the processing is done there. You run the application on the phone side, and once the rendering is done, it sends the bitmap to the glasses. Aside from these benefits, this gives developers one other advantage: with the application running on the phone, you don't need to handle any communication protocols like you would with Android Wear, for example, where you have to create a node and then communicate over that. With SmartEyeglass, since the application is already on the phone, you can directly leverage all phone functions such as GPS, internet connection, et cetera. It also means the application itself is structured as a plain Android project, which we are all familiar with, and that really helps you get started quickly.
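To make that concrete, here is a minimal sketch of what such a phone-side extension can look like. This is not code from the talk: the class and method names (ControlExtension, SmartEyeglassControlUtils, showBitmap) and the 419x138 frame size follow my reading of the published SmartEyeglass SDK samples, so treat the details as assumptions to verify against the actual SDK.

```java
// Minimal sketch of a SmartEyeglass extension, assuming the published
// SDK's API (names from its samples; verify against the real SDK).
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import com.sony.smarteyeglass.extension.util.SmartEyeglassControlUtils;
import com.sony.smarteyeglass.extension.util.SmartEyeglassEventListener;
import com.sonyericsson.extras.liveware.extension.util.control.ControlExtension;

public class HelloControl extends ControlExtension {
    // Assumed display resolution of the SmartEyeglass.
    private static final int WIDTH = 419;
    private static final int HEIGHT = 138;
    private final SmartEyeglassControlUtils utils;

    public HelloControl(Context context, String hostAppPackageName) {
        super(context, hostAppPackageName);
        // All logic runs here on the phone; the glasses only show bitmaps.
        utils = new SmartEyeglassControlUtils(hostAppPackageName,
                new SmartEyeglassEventListener() { /* taps, gestures, ... */ });
        utils.activate(context);
    }

    @Override
    public void onResume() {
        // Render a frame with normal Android drawing APIs on the phone
        // (full access to GPS, network, etc.), then push it to the glasses.
        Bitmap frame = Bitmap.createBitmap(WIDTH, HEIGHT, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(frame);
        canvas.drawColor(Color.BLACK);
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setColor(Color.WHITE);
        paint.setTextSize(24f);
        canvas.drawText("Hello, SmartEyeglass", 20f, 80f, paint);
        utils.showBitmap(frame);
    }

    @Override
    public void onDestroy() {
        utils.deactivate();
    }
}
```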
Moving on, I would like to talk about my first application for SmartEyeglass. It was actually a 3D model viewer. It sounds like it doesn't serve any specific purpose, and it is more of a general-purpose application. My plan in developing it was to create a library that would allow me to use OpenGL and 3D graphics on SmartEyeglass, because the SDK itself only provides a way to display bitmaps, nothing 3D. But I wanted to show some exciting, dynamic 3D graphics.
To do that you need OpenGL or some other type of rendering engine, and it was a bit of a challenge to get OpenGL working with the SmartEyeglass SDK. So I started creating this library. On one end I have OpenGL rendering the frame, but on the other end I need a bitmap. And since SmartEyeglass applications run in the background on the phone, I do not want any kind of SurfaceView or visible layout; that would mean keeping the smartphone screen on all the time, which I really don't want. So I created a pixel buffer handler for OpenGL which takes care of all the initialization of the OpenGL context, the rendering, and so on, and at the end it gives you the bitmap to use on the SmartEyeglass. You can ask the pixel buffer for another rendered frame whenever you need one, and it gives you a dynamically updated frame. On the OpenGL side you have your renderer, where you can do your manipulations, mathematical operations, and everything; on the SmartEyeglass application side you request an updated frame, and it just gives you that. It's as simple as that. You see the structure here; I tried to keep the library consistent with the usual Android GLSurfaceView, so the library I developed asks you for a GLSurfaceView.Renderer instance, which is widely used on Android for OpenGL implementations. So I think it may already work with many Android game engines; I haven't tried, but I believe it will. I use low-level OpenGL directly for rendering.
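The talk doesn't show the library's source, but the standard way to build such a pixel buffer handler on Android is an EGL pbuffer surface with no visible view. The sketch below illustrates that technique for GLES 1.x; it is a minimal illustration under those assumptions, not the speaker's actual library.

```java
// Sketch of the offscreen "pixel buffer" idea: drive a normal
// GLSurfaceView.Renderer against an EGL pbuffer surface (no visible
// view, so the phone screen can stay off) and read the result back
// into a Bitmap for the glasses. GLES 1.x via EGL10 for brevity.
import java.nio.IntBuffer;
import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLContext;
import javax.microedition.khronos.egl.EGLDisplay;
import javax.microedition.khronos.egl.EGLSurface;
import javax.microedition.khronos.opengles.GL10;
import android.graphics.Bitmap;
import android.opengl.GLSurfaceView;

public class PixelBuffer {
    private final int width;
    private final int height;
    private final EGL10 egl;
    private final EGLDisplay display;
    private final EGLConfig config;
    private final EGLContext context;
    private final EGLSurface surface;
    private final GL10 gl;
    private GLSurfaceView.Renderer renderer;

    public PixelBuffer(int width, int height) {
        this.width = width;
        this.height = height;
        egl = (EGL10) EGLContext.getEGL();
        display = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
        egl.eglInitialize(display, new int[2]);

        // Ask for a config that supports offscreen (pbuffer) surfaces.
        int[] configAttribs = {
                EGL10.EGL_SURFACE_TYPE, EGL10.EGL_PBUFFER_BIT,
                EGL10.EGL_RED_SIZE, 8, EGL10.EGL_GREEN_SIZE, 8,
                EGL10.EGL_BLUE_SIZE, 8, EGL10.EGL_ALPHA_SIZE, 8,
                EGL10.EGL_DEPTH_SIZE, 16,
                EGL10.EGL_NONE };
        EGLConfig[] configs = new EGLConfig[1];
        egl.eglChooseConfig(display, configAttribs, configs, 1, new int[1]);
        config = configs[0];

        context = egl.eglCreateContext(display, config, EGL10.EGL_NO_CONTEXT, null);
        int[] surfaceAttribs = {
                EGL10.EGL_WIDTH, width, EGL10.EGL_HEIGHT, height, EGL10.EGL_NONE };
        surface = egl.eglCreatePbufferSurface(display, config, surfaceAttribs);
        egl.eglMakeCurrent(display, surface, surface, context);
        gl = (GL10) context.getGL();
    }

    public void setRenderer(GLSurfaceView.Renderer renderer) {
        this.renderer = renderer;
        renderer.onSurfaceCreated(gl, config);
        renderer.onSurfaceChanged(gl, width, height);
    }

    // Draw one frame and copy the pixels back into a Bitmap on demand.
    public Bitmap getBitmap() {
        renderer.onDrawFrame(gl);
        IntBuffer pixels = IntBuffer.allocate(width * height);
        gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixels);
        pixels.rewind();
        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bitmap.copyPixelsFromBuffer(pixels);
        // A real implementation also flips the image vertically (GL rows
        // are bottom-up) and swaps the R/B channels (RGBA vs. ARGB).
        return bitmap;
    }
}
```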
Once you have your GLSurfaceView.Renderer done (for example, what I did was a model renderer that parses a mesh .obj file and renders it in an OpenGL view), you set your renderer on the pixel buffer, and on the SmartEyeglass app you get the bitmap.
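Wired together, that flow might look like this (PixelBuffer, ModelRenderer, meshFile, and utils are all hypothetical names carried over from the earlier sketches):

```java
// Hypothetical wiring: render offscreen, then push the frame to the glasses.
PixelBuffer pixelBuffer = new PixelBuffer(419, 138);   // display size assumed
pixelBuffer.setRenderer(new ModelRenderer(meshFile));  // any GLSurfaceView.Renderer
Bitmap frame = pixelBuffer.getBitmap();                // one updated frame on demand
utils.showBitmap(frame);                               // SmartEyeglassControlUtils call
```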
And it shows up like this. As you see here, you have the 3D model, so it gives you a really immersive augmented reality experience, I would say. One part still missing is object tracking, which I'm planning to do later when I have time. Once that is done too, I think it will be a complete augmented reality experience. For now the model is only rotated by the nine-axis sensors on the glasses, so you see a model and you can look around it, but it will shift, because I don't have object tracking.
But it's definitely possible, because the SmartEyeglass camera provides a JPEG stream: you can keep listening for JPEG frames and do some kind of optical flow tracking to keep the object in place. We know it's feasible because at previous hackathons we attended, we've seen teams work with OpenCV on SmartEyeglass and manage to do face tracking and face recognition. So from the SDK and performance point of view it really works; it just needs some extra implementation to get to that point, which I'm looking forward to doing.
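To illustrate the optical flow idea (this is not code from the talk), here is a sketch using OpenCV's Java bindings: sparse Lucas-Kanade flow between consecutive grayscale frames yields an average drift that could be applied as an offset to the rendered overlay. Decoding the SmartEyeglass JPEG frames into OpenCV Mat objects is assumed to happen elsewhere.

```java
// Sketch: estimate how the scene shifts between camera frames with sparse
// Lucas-Kanade optical flow (OpenCV Java bindings), so the overlay can be
// nudged to keep a virtual object visually anchored. Frames are assumed
// to arrive as grayscale Mats decoded from the camera's JPEG stream.
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.core.MatOfFloat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;
import org.opencv.video.Video;

public class FlowTracker {
    private Mat prevGray;
    private MatOfPoint2f prevPts;

    // Returns the average drift of tracked points since the last frame;
    // apply it as an offset when drawing the overlay.
    public Point update(Mat gray) {
        if (prevGray == null) {
            // Pick good corner features to track on the first frame.
            MatOfPoint corners = new MatOfPoint();
            Imgproc.goodFeaturesToTrack(gray, corners, 50, 0.01, 10);
            prevPts = new MatOfPoint2f(corners.toArray());
            prevGray = gray;
            return new Point(0, 0);
        }
        MatOfPoint2f nextPts = new MatOfPoint2f();
        MatOfByte status = new MatOfByte();
        MatOfFloat err = new MatOfFloat();
        Video.calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err);

        // Average the displacement of the successfully tracked points.
        Point[] p0 = prevPts.toArray();
        Point[] p1 = nextPts.toArray();
        byte[] ok = status.toArray();
        double dx = 0, dy = 0;
        int n = 0;
        for (int i = 0; i < ok.length; i++) {
            if (ok[i] == 1) {
                dx += p1[i].x - p0[i].x;
                dy += p1[i].y - p0[i].y;
                n++;
            }
        }
        prevGray = gray;
        prevPts = new MatOfPoint2f(p1);
        return n > 0 ? new Point(dx / n, dy / n) : new Point(0, 0);
    }
}
```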
Other than that, I would like to point out that this application is open source, so if you're interested in that kind of application, you can go ahead and do it before I do, and you can publish it on the Play Store. You can even sell it, because it's BSD-licensed,
so it's okay to share it and sell it directly under that license. To give you a deeper understanding of how it works: in the SmartEyeglass activity I'm getting sensor data and running it through some mathematical operations in a sensor data processing function, and then on the OpenGL side I'm simply rotating the object according to the rotations from the sensor. So this is one way to handle user interaction in augmented reality: through head tracking.
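As a minimal sketch of that sensor-to-renderer path, here is the standard Android rotation-vector approach. Whether you read the glasses' sensors through the SDK or the phone's own sensors, the shape of the pipeline is the same; the setHeadRotation hook on the renderer is a hypothetical name, and the speaker's actual math may differ.

```java
// Sketch: feed rotation-vector sensor readings into the renderer so the
// model counter-rotates with the wearer's head. Standard Android sensor
// API shown; ModelRenderer.setHeadRotation is a hypothetical hook.
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class HeadTracking implements SensorEventListener {
    private final SensorManager sensorManager;
    private final ModelRenderer renderer; // your GLSurfaceView.Renderer

    public HeadTracking(Context context, ModelRenderer renderer) {
        this.renderer = renderer;
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        Sensor rotation = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
        sensorManager.registerListener(this, rotation, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Turn the fused nine-axis reading into a 4x4 rotation matrix...
        float[] rotationMatrix = new float[16];
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        // ...and hand it to the GL side, which applies it to the model-view
        // matrix so the object appears fixed in space while the head moves.
        renderer.setHeadRotation(rotationMatrix);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* unused */ }

    public void stop() {
        sensorManager.unregisterListener(this);
    }
}
```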
This is actually not so unfamiliar to us: if you've been following virtual reality development over the last two years or so, VR depends on head tracking a lot, and it's one way to offer a really immersive experience. We see that there are already so many different solutions for virtual reality, but not so much for augmented reality, and I see this as a really good opportunity for people who want to drive innovation and be pioneers in this area. Augmented reality is coming up right behind virtual reality, and it requires similar solutions and similar applications, with plenty of inspiration to take from VR. So you can directly use head tracking sensors and apply them in your application, but I believe there are also alternative solutions, such as pointing with sensors,
or directing the user, and so on. So that is one way to handle user interaction: with sensors. Another way I found was using a smartwatch. This came later, actually, after using the SmartEyeglass at events, where I had to handle the control box. If you tried it downstairs, you may have noticed that I had to take care of the box, push its buttons, and manage the SmartEyeglass at the same time. This was a bit of a hindrance for me, and I wanted something else, something I'm more familiar with, and that is smartwatches. When you have this box you are kind of attached to it, with the cable and everything, and it doesn't really offer a nice experience. But say you have complete control of your application with a smartwatch: then you can just clip the box onto your back, say, and you already have the smartwatch on your wrist. The smartwatch also has sensors that you can make use of in your application. Then your hands are more free: you don't need to push buttons and you can do other things, and since the smartwatch is attached to a joint of your body, you can really feel the freedom when you experience it. So this was another solution I developed. We contacted some of our partners, and they also found it very useful; they are actually working on solutions based on this, and I believe it will make the augmented reality experience more immersive and intuitive. These two are the basic solutions I've managed to try so far, but I believe there will be more things
coming for augmented reality user interaction, and user experience in general. It's an area that's really fertile for innovation, and I think at this point it's relatively easy to become a pioneer by developing solutions, because everybody's looking into this area; we see companies trying to build new hardware and new solutions. For developers, I think the key takeaway from this emerging technology is to develop user experience solutions and bring interaction solutions into the equation. And maybe you've heard of the Internet of Things. I thought about this a bit: we now use quite a lot of devices, smartwatches and smart eyeglasses, and there's actually a term for this, the body area network,
though I prefer to call it the connection of things, similar to something we are already familiar with, the Internet of Things. This shows the way the smartwatch controller application works. Since the smartwatch is Android Wear, it is already communicating with the phone, and the SmartEyeglass application already runs on the phone. All you need to do is get the commands from Android Wear, which is done with Google's library, and once you get a command on the phone it's really easy to apply it to your application, because you just say: do this, do that, and so on.
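A sketch of the phone-side receiver, using the Google Play services Wearable MessageApi of that era; the message paths and the GlassApp singleton are hypothetical names.

```java
// Sketch: phone-side receiver for watch commands, using the Play services
// Wearable MessageApi of that era. The message paths and the GlassApp
// singleton are hypothetical names.
import com.google.android.gms.wearable.MessageEvent;
import com.google.android.gms.wearable.WearableListenerService;

public class WatchCommandService extends WearableListenerService {
    @Override
    public void onMessageReceived(MessageEvent event) {
        // The watch app sends a tiny message per button press. Since the
        // SmartEyeglass app already runs on this phone, we can act directly.
        switch (event.getPath()) {
            case "/smarteyeglass/next":
                GlassApp.instance().showNextItem();
                break;
            case "/smarteyeglass/rotate":
                GlassApp.instance().rotateModel(15f); // degrees, for example
                break;
            default:
                super.onMessageReceived(event);
        }
    }
}
```

With that era's library, the service would also be registered in the manifest with the com.google.android.gms.wearable.BIND_LISTENER intent filter so Play services can deliver the watch's messages.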
What I've done so far is just a set of buttons, but even that improved the user experience a lot, and I can only imagine how much further it would go if we used the watch's sensors as well. Imagine doing some kind of gesture to move through the augmented reality user experience; I think that would be really revolutionary. One other thing I want to point out: if you observe the emergence of virtual reality, you see a lot of different accessories being adopted, for example Leap Motion or the Myo armband, and they really create that wow experience for the user. You completely leave the mouse, the keyboard, and everything behind, and you just use gestures. I think the same thing will happen for augmented reality really soon; it's just waiting for implementations and developer interest.
That was the challenge in user experience for AR, and I mentioned that gesture tracking will be really important for this area. Another thing I would like to point out: aside from sensors and third-party accessories, all of the smart glasses on the market have a camera, if you notice, and aside from just taking pictures it can be leveraged to make your applications context-aware. This is actually a key point in augmented reality, because with augmented reality you should avoid user input as much as possible. Applications should be able to provide the necessary information without any input from the user, and to do that you have to make environment-aware, or context-aware, applications through the camera. Using the camera you can detect where the user is and what he's passing through. For example, in the Terminator movies, when the Terminator runs through some place, it automatically draws a pathway without any input. I think that's the kind of context-aware solution AR is waiting for. I mentioned smartwatches, and I think armbands will also be an integral part of the solution; the Myo armband has a really nice combination of muscle sensors and a nine-axis sensor, which can enable the kind of interaction you see in the Minority Report movie. We can really do that now: with current technology, combining the SmartEyeglass with a smartwatch or a Myo armband, you could actually achieve this. So that was pretty much all I can say about the user experience. If you're interested in developing for SmartEyeglass
and you want to get started, take a look: we have a developer site, and it's really easy to get started quickly. We have an emulator running on Android, so even without the SmartEyeglass you can download the SDK, and the emulator lets you install applications from the Play Store or run the sample applications that we provide with the SDK; you can just compile them and try them out on the emulator really quickly. I really recommend doing that, because as I mentioned earlier, it's really open to innovation and it's a really exciting area. One other thing I would like to mention is that the SmartEyeglass is actually available for purchase today, and this is really exciting, because when you look at the smart glass market it's really hard to find AR glasses with a really high-transparency display. With the SmartEyeglass, if you try it downstairs, you've probably already experienced it: aside from the graphics area you can really see through and behind the display. I believe this is an exciting moment for smart eyeglasses and augmented reality, that such a device is publicly available, and I think it will help grow the customer base and the demand for solutions and implementations in this area. So I welcome you to our booth downstairs, where I will demo the 3D model application I made; we also have some other demos, a fun game that you can try out, and other devices to try as well. So please come and try it out. Thank you, that was all from me.