Supporting Nouveau on the Tegra K1 System-on-chip

Video in TIB AV-Portal: Supporting Nouveau on the Tegra K1 System-on-chip

Formal Metadata

Title: Supporting Nouveau on the Tegra K1 System-on-chip
Subtitle: How NVIDIA became a Nouveau contributor
Alternative Title: Graphics - Tegra
Title of Series
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Release Date
Production Year

All right, hello everybody. I have a cold, so I will do my best. I'm from NVIDIA, and it's great to be here today. I'm going to go through some of the challenges we met in our work to support the Tegra K1 with Nouveau, and tell a little bit of how we came to become contributors to the Nouveau project. So, first, a little bit of
the story so far. In 2014, last year, we released the Tegra K1 system-on-chip, which is an ARM-based SoC that comes in two versions, with 32-bit and 64-bit cores. One of its specifics is that it is the first SoC from NVIDIA that uses the same GPU architecture as our desktop GPUs. Before the K1 we were using a dedicated GPU architecture for mobile; with the K1 we just took Kepler and put it into the SoC, so it is essentially the same GPU as what you find on a desktop graphics card. An interesting fact is that Nouveau supports desktop Kepler quite well, so we wanted to see how little work it would take to get it running on the K1, and on the 1st of February 2014 we actually started sending patches to the Nouveau developers, to the surprise of quite a few people. Interestingly, that was just one year ago, so let me show you where we are today, one year later.
So this is our Jetson TK1.
It's a development board with a Tegra K1 inside, the 32-bit version, and it is very well supported upstream; it is really one of the best supported ARM SoC boards. If you take a vanilla kernel and compile it, it just works.
Here is Weston running on top of Nouveau, and these are GL benchmarks running on top of it. As you can see, the performance is pretty decent.
And here you have some pretty complex shaders running. So this is NVIDIA's latest SoC running a completely free software graphics stack, with Weston and other Wayland clients. We have pretty
good Wayland support; what we don't have, interestingly, is X support.
So I guess we are aiming for the future here. A DDX could probably be implemented pretty easily using glamor; it's not that we don't want it, we just haven't spent time on it, but if someone wants to give it a try, I'm sure it is quite doable. So let me go back
to the slides here. Sorry, I'm having some trouble with the laptop; it was working just fine this morning. OK, here we go.
The first slide is about credits, because many people have been involved in this work besides us, either by submitting patches to Nouveau or by giving us advice on how to do the porting, because we had plenty of questions. And of course we have to thank the whole Nouveau community, which has been absolutely fantastic to work with and very supportive of our efforts. The outline of this talk: first a very quick overview of GK20A — GK20A is the GPU of the Tegra K1, and I will refer to it by this name — then a very high-level explanation of how we did the Nouveau bring-up. The next part will be a little bit more interesting: it covers some of the challenges we met with memory management. Some of them we addressed; some of them are still open questions, so it would be nice to hear your opinions on them. Then I will talk about the engine layout on Tegra, which is peculiar, especially when you compare it to a desktop GPU, and finish with a very quick word on user space — quick because this is still work in progress. So first, GK20A. As I explained, it is a Kepler GPU, the state of the art at the time, supporting OpenGL 4.3 and OpenGL ES 3, with unified shaders. What is interesting for this talk is that the GPU is basically virtualized on a per-process basis: each process that makes use of the GPU gets its own context, including the state of the engines, which is switched from process to process, and its own virtual memory space, which ensures that a process cannot use the GPU to read memory that does not belong to it. Classically, you talk to the GPU using a push buffer and a FIFO from user space: you write the commands that you would like
the GPU to run into a buffer, hand that buffer over to the hardware, and the hardware fetches the commands, executes them, and eventually notifies you that the job is finished, using a fence. Nouveau, as you probably know, is the open-source driver for NVIDIA GPUs. What is really impressive about it is that it supports all NVIDIA GPUs, from 1998 up to the latest Maxwell ones, so it is a really remarkable piece of work. The architecture of Nouveau is extremely modular, and the way you support a new GPU is basically to make an assembly of engine drivers: typically you have a driver for the FIFO, one for the graphics engine, one for the copy engines, one for video decoding, and so on, and you put them together, and that defines a GPU. The reason for that design is that there is a lot of redundancy between GPUs. So this is
basically the engine configuration for GK20A: all you have to do is select a driver for the FIFO, one for the graphics engine, and so on. If you look at it, some of the drivers are specific to GK20A, meaning they have been written specifically for this GPU, but a lot of them actually come from previous generations of GPUs: there is a lot of code you can reuse, because it really is the same part as in the desktop GPUs. Basically, all we had to do to get Nouveau to render graphics on GK20A was to define the specific set of classes it supports; almost all the rest we could reuse from code that was already there — we had to make a couple of changes, but it was already written. On the other hand, there are a few drivers we had to write from scratch, but they are the minority, and I was pleasantly surprised to see how well this part of the work went. So if you want to support GK20A, there are two things you need to do. The first is binding the existing engine drivers that can be reused for GK20A, and writing the ones that are missing. The second, and more involved, operation is allowing Nouveau to work on a platform that is very, very different from what it was used to working with. The first difference is that Nouveau had a very strong expectation of running on a GPU sitting behind a PCI or PCI Express bus — or AGP at some point — which is not where our GPU sits.
Our GPU is connected directly to the SoC: it is a platform device. We just don't have a PCI bus, and Nouveau was doing some very PCI-specific things, like using the PCI-provided I/O range for the GPU registers, and using a PCI function to map system memory pages, so whenever you hit those code paths Nouveau would crash. What we had to do was to abstract the bus and add platform device support. Platform devices are what we use on ARM for devices that do not sit on a discoverable bus: instead of reading I/O addresses from PCI, we just get them from the device tree, and we replaced the PCI page-mapping function with the more generic DMA API equivalent, which works for this case as well. All in all, the bus abstraction was about 350 lines of code changed, plus 300 more to probe the GPU from the device tree — not that much work. At that point you can instantiate Nouveau from the device tree, and it will just crash a bit later, because it will look for the video BIOS, which is also something we don't have. On desktop, the VBIOS provides a lot of useful information, for instance about the page tables, and it also performs some of the core initialization, which was not done at all here. There is only one thing you can do: abstract that as well, figure out where to get this information without a VBIOS, and perform the necessary initialization right from the driver instead of expecting the device to have done it. In almost all cases, we had to provide things that Nouveau was expecting to find but could not. And the main annoyance, in retrospect, is that GK20A does not have dedicated video RAM: the GPU is competing with the other devices on the SoC for system memory, and it uses system memory pages directly.
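For reference, probing from the device tree means the GPU is described by a node roughly like this (the `nvidia,gk20a` compatible string matches the upstream binding, but the addresses and interrupt numbers here should be treated as illustrative):

```dts
gpu@57000000 {
        compatible = "nvidia,gk20a";
        /* Register apertures and interrupts come from the device
         * tree instead of PCI config space; values illustrative. */
        reg = <0x57000000 0x01000000>,
              <0x58000000 0x01000000>;
        interrupts = <GIC_SPI 157 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SPI 158 IRQ_TYPE_LEVEL_HIGH>;
        status = "disabled";
};
```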
Contrary to a desktop GPU, it does not have any video memory of its own; all the memory it uses comes from system memory.
So here is the first diagram, to explain how address translation works on a desktop GPU. The top line is the CPU virtual address space, which is per-process — this is how you implement memory protection between processes — and into which you can map, at page granularity, typically 4K pages, the CPU physical address space, which is directly mapped onto the system RAM; this is how you provide memory to a program. The GPU also has a virtual address space of its own, to make sure that GPU processes cannot touch each other's memory. The memory the GPU typically uses is the video RAM that lives on the GPU card, so most of its mappings are going to point into this VRAM. But you sometimes need to share memory between the CPU and the GPU, so the GPU can also have mappings to system memory; these have to cross the PCI bus, which is slower, so the GPU is going to try to maximize its use of VRAM. The CPU, in turn, has a way to reach the GPU memory through a small aperture — 128 megabytes on most cards, I think — called the BAR. The BAR is itself mapped through GPU page tables: you map a CPU virtual range onto the BAR area, and your reads and writes reach whatever GPU memory the GPU translation points to, so when you go through the BAR you are using the GPU's own page tables. So you have this configuration: the CPU virtual and physical spaces on one side, and the GPU virtual space on the other. And Nouveau is
built on this model. When you allocate a buffer in Nouveau, you give it a target: you decide whether this buffer should live in video memory or in system memory, and TTM — the memory manager that Nouveau uses behind the scenes — is going to try to maintain that placement. Buffers can move from one place to the other: for instance, if you want to export a buffer using PRIME, it has to be in system memory. But basically, when you allocate a buffer, you decide whether it goes into VRAM or into system RAM. And one very nice thing you get with this model is that coherency between the CPU and the GPU is maintained automatically most of the time: if the CPU wants to access video memory, it goes through the BAR, which uses the GPU's translation, so you see exactly what the GPU sees; and when the GPU accesses system memory, it goes through PCI, and PCI snoops the accesses the GPU is doing and will automatically invalidate or flush the CPU caches. So coherency is maintained automatically, which is a very nice thing to have. On Tegra,
things are quite different. On the CPU side, nothing really changes, but there is no video memory any more: the GPU is going to use system memory directly for its purposes. In itself that is not a problem. There were some older SoCs that decided to carve out, say, the last 256 megabytes of memory exclusively for GPU usage — the GPU could use that area of memory and nothing else — but this is not how it works here: GK20A can really access any part of system memory, and its pages can be interleaved with everybody else's. One thing you can also see on the diagram is that Tegra adds an optional second layer of translation: you can have one translation through the GPU virtual address space, and then another one through the IOMMU, before you reach system RAM. I will explain in a minute why we have this IOMMU. And we still have a BAR, which now only allows us to access system memory, but again by going through the GPU's translation.
So, in the end, not having dedicated video memory basically means that all allocations are going to be in system RAM, and that it no longer makes sense to have this choice of target between VRAM and system memory in Nouveau. That is the first problem we needed to solve: how do we manage the fact that we don't have the VRAM device that Nouveau expects so much? The second problem is how to make use of the IOMMU we have: what is this extra layer of translation for, and how does it fit in? And the third problem, the biggest of all, is that on Tegra we don't have the implicit coherence between the CPU and the GPU: you have to flush and invalidate the CPU caches yourself, which is something Nouveau never did before. So, how do we do without video memory? We tried different approaches. At first we simply tried to emulate a VRAM device, but we quickly realized that this would lead to very suboptimal memory management, because the driver would think "this buffer is in VRAM, but I want to export it, so I have to move it to system memory" and would physically copy it — from system memory to system memory — which makes everything much less optimal than it should be. The other approach, the one we decided to go with, is to teach Nouveau that some GPUs simply don't have video RAM and do all their allocations in system memory. This requires more changes to the code, but it has the advantage of being accurate, and it prevents these needless moves from happening. So that is the decision we made to solve the first problem. Next, let's talk about the IOMMU.
As I said, Tegra introduces a second layer of address translation, and you might wonder why, especially since the GPU already has an MMU of its own and can therefore already flatten scattered physical pages into a contiguous GPU virtual range. Well, it can, but there are two reasons. First, in order to set up the GPU's page tables, you already need memory, and some of it needs to be physically contiguous. It is not too hard to find contiguous blocks in video memory, which is exclusively dedicated to the GPU, but it is much harder to do so in system memory, which is used by basically everybody else and can become fragmented much more quickly. So one use of the IOMMU is to give the GPU the illusion that it has lots of contiguous memory available, even when the underlying pages are scattered. The second advantage of the IOMMU is that small pages are not something the GPU is terribly good at: when the GPU takes a TLB miss and needs to fetch a translation, it is much more costly than a TLB miss on the CPU. If we can flatten a GPU object into a contiguous range of the IOMMU space, we can map it with large pages on the GPU side and take far fewer TLB misses than if we had to map every small page individually. So the IOMMU is also there for performance reasons. Then there is the CPU/GPU coherency issue, the biggest issue we had to deal with; on desktop it is handled transparently, and here we have to do something about it ourselves. For most buffers it is actually not that big a deal, because there are hooks in TTM from which we know that now is a good time to validate, or to synchronize, a buffer between the CPU and the GPU. So in the general case it is not that much work: we can solve it using those hooks.
But there remain some objects for which we don't want to do this synchronization, or at least for which we don't want to pay for a sync. One of them is fences. Fences are small counters to which the GPU can write a value, usually after it has performed a job; this is how we keep track of the progress of GPU jobs. When the CPU sees that a fence has reached a given value, it knows the corresponding job is done and that its buffers can be reused, given to some other engine, displayed, and so on. If you have to invalidate the CPU cache every time you read a fence, it becomes very costly, so for this use case we really want to read directly from memory, bypassing the cache. We have a similar problem with GPFIFOs: a GPFIFO is the list of GPU jobs that we want to run, and each time we submit a new job we write a pointer to it into the FIFO and ask the GPU to process it — and we don't want to flush the cache for that either. So for this kind of object we added a new flag that specifies that we want a coherent mapping between the CPU and the GPU. Problem solved — except that ARM makes things a little bit more difficult. It is hard to explain briefly, but ARM has a specificity, which is that you cannot map a given memory page several times with different attributes: you cannot have a memory page that is mapped cached somewhere and mapped uncached somewhere else in the address space. The reason this is a big problem is that the kernel already has such a mapping: the lower part of memory, lowmem, is permanently mapped one-to-one into the kernel address space
with cached attributes. So if I want to use a page from this lowmem area for, say, a fence, and map it uncached, we are in this aliasing situation, and the behavior is undefined. At first we just tried it anyway, and then we saw some very strange issues: a given benchmark would work fine most of the time, then suddenly start running at half speed for no reason we could understand, or would freeze for ten seconds every now and then — which is exactly what undefined behavior looks like. It was very strange, and we finally tracked it down to this. So we cannot map that memory uncached directly; what can we do? There are a couple of ways to solve this problem. The first is to use the BAR for these objects: we allocate the object, and when the CPU accesses it, we let it do so through the BAR, which uses the GPU path and preserves coherence. The problem is that the BAR is a relatively scarce resource: we have 128 megabytes of it, which sounds like a lot, but it is a finite resource. Another solution is to allocate this memory through the DMA API's coherent allocator, which solves the aliasing problem by transparently remapping the pages and taking care of the kernel mapping; but then you end up with a permanent kernel mapping for these objects, which is not nice either, and most fences typically need to be mapped into user space as well, so you need a way to get them there. At this point we don't have a better solution than these two, so — enough about memory.
Now to the second interesting thing: the engine layout. If you take a desktop GPU, it contains plenty of things: you have a graphics engine, which does all the pretty stuff; you have display controllers, which send the pixels to your screen; you have video decoders and encoders, and copy engines — and all these parts are handled by the same driver. Once you have rendered into a buffer using the graphics engine and you want to display it, there is nothing special to do: the driver passes it through, and it just works. On Tegra, we only have a graphics engine in the GPU IP block. Why? Because we already have a different IP block that does video encoding and decoding, which can be supported by its own driver, and the display controller is not part of the GPU either: it is yet another IP block, driven by a different driver, tegra-drm. And of course, all these IPs use system RAM for their storage. What this means is that on Tegra, Nouveau is going to drive a lot less hardware than it does on desktop, and it also means that buffers will have to cross the driver boundary much more often than they would on desktop: if you want to display a buffer you have just rendered, you have to pass it from Nouveau to tegra-drm. This situation is actually pretty close to Optimus laptops, where you have an integrated GPU that drives the display and a discrete NVIDIA GPU that renders, and buffers have to travel between the two drivers to be composited or displayed. So on this setup,
you must have PRIME working if you want to display anything: rendering on its own is fine, but to start displaying things rendered by the GPU, you must have PRIME buffer sharing. That is actually interesting, because it matches a feature that made it into the kernel about two years ago, render nodes: the separation of mode setting and GPU acceleration into different device nodes. On Tegra we have exactly this situation: we have a card device, driven by tegra-drm, which is only capable of setting a mode and displaying planes — basically what Jerome explained two hours ago — and we have a render device, to which you can submit rendering jobs, but which cannot display anything at all. It means that even a simple GL application needs to understand this split and handle it. With Mesa you have to change things a little bit, because Mesa just assumes that the card device is also capable of GPU rendering, and will happily send rendering commands to the display device, which does not work here. We had to make it open both devices and share the frame buffer between tegra-drm and Nouveau for every frame. It is not that big a change, but it is something you have to manage at some level. There is actually an attempt to handle all of this inside Mesa, to make it possible to run applications without any change, but I think it is still ongoing; this was discussed this morning.
to apply some effects on the frames using the GPU, and then passes it to the encoding hardware to create an H.264 stream, all while displaying the frames on screen. You have this whole chain that you want to implement. Who is supposed to allocate the memory for the buffers? Which driver do you ask? On Android, ION kind of addresses this issue by providing an independent memory allocator: an ION device which you can open, allocate buffers from, and then import into the different drivers. That is one thing you can do, but you still have a problem: the engines do not necessarily have the same constraints. They do not necessarily work with the same memory ranges, and they do not necessarily work with the same pixel layouts. A GPU typically renders its buffers in tiles instead of linearly; it renders blocks of pixels so that neighboring pixels end up closer in memory, to minimize cache misses. If you render into a tiled buffer and then pass it to a display controller that does not understand this format, you are just going to get garbage. In the case of an SoC like Tegra, we try to design the engines so that they can work together nicely, but if you have something more heterogeneous, how do you handle that? It is not Tegra-specific; it is a more general question, but I think it is interesting to think about. [An audience member asks a question, largely inaudible.]
[The question, largely inaudible, appears to concern an existing framework that addresses this allocation problem.] That sounds interesting; I was not aware of it, so thank you. Is that framework in the kernel? OK, I should look at it. Alright, I am just going to
give a very quick word about the userspace changes we had to make for graphical applications to run on Tegra K1. The first thing we did was amend Mesa to recognize the GPU, which is not a PCI device like the desktop ones. It was a patch of about 25 lines of code to recognize the GPU and try to render a triangle, and to our big surprise, we got a triangle. I spent two hours shaking my head in disbelief, but it actually worked. After that we got kmscube working, rendering through Nouveau and displaying through tegra-drm, and then some people from a UK company, some of whom are here I think, did the same thing on Weston and got it rendering there as well. This initial support was a fragile thing: once we moved on from the first demo, we had to make Mesa aware that not all GPUs have video RAM. The GK20A has none, and when you do not have it, some more work is required, which we did. Again it is nothing big, less than 50 lines, and now Mesa works on a VRAM-less device. We still have some work to do in order to integrate seamlessly with tegra-drm: as I explained, applications like kmscube have to understand that you cannot submit rendering jobs to the card device; you have to open the render device and share buffers between the two. So, some words of conclusion and things we still have to do. Nouveau is very, very close to working out of the box using only upstream sources. There are just a couple of patches remaining in my tree that I need to submit, so I expect us to have full support really soon. I want to give a word
about the remaining obstacles on top of those patches. Nouveau requires some firmware to run, especially for context switching and for power management, which Nouveau typically obtains by reverse engineering, but which we do not have for this GPU yet. This is something we are going to have to address anyway, because starting from Maxwell, the firmware must be signed for security reasons, so we will have to provide Nouveau with firmware that it can just use as it is. This is the most pressing issue; we are working on it internally and also with the community, and hopefully the firmware will come, because otherwise I do not think there is anything you can legally run on your Maxwell. And as I said before, working within the Nouveau community has been a very, very nice experience. We have been warmly welcomed, and we have learned a lot of things about the hardware and about driver writing. I really admire the people doing the reverse engineering; it is actually funny, because sometimes they come and teach us things about the GPU that we did not expect. Working with them has been a great experience, and we plan to continue it. We announced last month the next Tegra chip, the Tegra X1, which features a Maxwell GPU, and we would like to provide the same level of support for it as well. Hopefully that will translate into more meaningful contributions from NVIDIA, because for Kepler, Nouveau support is already excellent on desktop parts; for Maxwell it is not there yet, so hopefully we can also contribute more code on that front. Thank you
very much. One thing I should add: if you are interested, the Jetson TK1 costs only 192 dollars, and if you want to run a fully open-source graphics stack on it, you can go for it. There is a script, admittedly a bit of a rough one, that compiles everything you need and sets it all up for you. I am short on time, so if you have questions, go ahead. [An audience member asks a question, largely inaudible, apparently about capturing or reusing the video output.] Sorry, I did not quite get that. Would I say it already just works? I want to say yes, because it is very close; it will, very soon. One thing I forgot to mention: here is a list of supported boards, and we have some pretty decent display support for some of the consumer devices; some people managed to get this running on them. I cannot guarantee it will work on yours, and it would require some work at least, but you can expect to get something running. I was discussing yesterday with one of the developers of Replicant, and they would be very, very pleased to see Replicant running on this hardware using a fully free stack. [Question about a specific device.] I think it has display support; we published the device tree for it, so it is upstream, under review. We have not
tried everything on it, but I do not see any reason why it would not work. Alright, thank you very much.
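For reference, the device-tree side mentioned in the Q&A looks roughly like this. This is an illustrative fragment modeled on the upstream Tegra124 GPU node; the `nvidia,gk20a` compatible string is real, but the register ranges and interrupt numbers shown here are placeholders and should be taken from the actual upstream tegra124.dtsi, not from this sketch.

```dts
/* Hypothetical sketch of a Tegra K1 GPU node; property values
 * are placeholders, only the compatible string is authoritative. */
gpu@0,57000000 {
        compatible = "nvidia,gk20a";
        reg = <0x0 0x57000000 0x0 0x01000000>,   /* GPU registers */
              <0x0 0x58000000 0x0 0x01000000>;
        interrupts = <GIC_SPI 157 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SPI 158 IRQ_TYPE_LEVEL_HIGH>;
        interrupt-names = "stall", "nonstall";
        status = "disabled";  /* boards that wire it up enable it */
};
```

A board DTS that supports the GPU then flips `status` to `"okay"`, which is the per-device enablement step alluded to in the talk.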