libliftoff status update
Formal Metadata
Title: libliftoff status update
Author: Simon Ser
Title of Series: FOSDEM 2020
Number of Parts: 490
Part Number: 268
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/47435 (DOI)
Language: English
Transcript: English (auto-generated)
00:07
Okay. Nice. So, hi. I'm Simon Ser, also known as emersion, and I'm going to talk about libliftoff.
00:21
So, yeah. libliftoff's goal is to take advantage of KMS planes. So, I'll first explain what a hardware plane is. Then I'll explain what libliftoff is and what the current status of the library is, and then we'll see what the next steps are.
00:43
So, this talk is designed for mere mortals to understand what libliftoff is. So, if there are experts in the room, maybe it will be a little boring at the beginning. But, yeah. So, first, yeah.
01:02
What's a hardware plane? So, first, it has nothing to do with actual planes. It's a hardware feature in GPUs. So, before getting into hardware planes, let's just see how we get a frame on screen. How do we show something on screen?
01:22
So, basically, there's a user space program that wants to show something on screen. So, it has a frame. So, here's the typical screen with a terminal and a few windows. So, it talks to the kernel.
01:41
So, there's a kernel interface called KMS. So, it submits a frame to KMS. KMS then talks to the driver. So, you have a different driver depending on the GPU vendor. So, for Intel, it's i915. For AMD, it's amdgpu.
02:02
And each driver will program the hardware to display the frame on screen. So, one important thing is that since, I don't know, a few years ago, we have a new API to do this, which is called the atomic API.
02:23
So, submitting a frame is now atomic. So, you don't have a partial frame on screen. You don't have tearing and things like this. So, it's much better than before. With the legacy API, you could have corrupted frames that way.
02:43
So, the client is typically called the compositor because it will take a few windows. So, a terminal window, a calculator window, a document window here, and it will draw all of these windows
03:00
into a large buffer. So, it will actually perform a copy, and then it will submit the final frame buffer to KMS. So, nowadays, this is performed using OpenGL. So, planes basically allow the compositor not to copy
03:23
and not to use OpenGL at all, but to submit the client buffers directly to KMS. So, here, the compositor will submit three buffers in one step, in one atomic commit, with some metadata.
03:41
So, for instance, it says the terminal window is on the top left, the calculator window is on the bottom right. So, it just submits this whole state to KMS, and the hardware will perform the composition directly in the scan-out engine.
04:04
So, you don't copy anymore, and you have an API to set properties on all of these windows, like the position. So, why do we want to use planes?
04:21
So, it's zero-copy, as I've said. So, sometimes it's very important because some hardware needs a lot of time to perform copies, depending on the location of buffers and everything. For instance, on ARM GPUs, it makes a big difference. You also get lower latency if you don't do a copy
04:44
because you don't need to wait until the copy is over to submit the frame to KMS. And it also improves power consumption because when you use Planes, the render engine can go to sleep.
05:03
So, the part of the GPU that is used for OpenGL is not used anymore and is not draining the battery. Only the scan-out engine, which is sending frames to the screen, is still awake.
05:23
But planes come with some downsides, too. They are pretty hard to set up and use, especially when you don't control the buffers, because buffers come from clients.
05:41
So, right now, compositors don't really support Planes. There's one exception. Weston supports Planes pretty well. But apart from that, everybody just always composites the whole image.
06:01
So, one little exception is the cursor plane. So, when I move my cursor here, most compositors are able to put it in a plane, a special plane called the cursor plane. But that's the only thing compositors are able to use, except Weston.
06:20
So, one other issue with Planes is that they are, so, I said Planes are hard to use, and that's because they come with some constraints. So, for instance, here are the Planes I have on my Intel machine. I have three Planes. The first Plane here is able to display buffers
06:44
with only a few formats. So, for instance, it can do C8, RGB565, and all of these in the list. The second plane can do a different set of formats, with YUV formats. And the last plane is only able to do ARGB8888.
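To make this concrete, here is a minimal libdrm sketch (not from the talk) that enumerates the planes a device exposes and the formats each one accepts; the device path is an assumption and error handling is kept to a minimum.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void) {
	/* Assumed device node; adjust for your machine. */
	int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
	if (fd < 0)
		return 1;

	/* Without this cap, KMS only exposes cursor/overlay planes. */
	drmSetClientCap(fd, DRM_CLIENT_CAP_UNIVERSAL_PLANES, 1);

	drmModePlaneRes *res = drmModeGetPlaneResources(fd);
	for (uint32_t i = 0; res != NULL && i < res->count_planes; i++) {
		drmModePlane *plane = drmModeGetPlane(fd, res->planes[i]);
		printf("plane %u supports %u formats:", plane->plane_id,
			plane->count_formats);
		for (uint32_t j = 0; j < plane->count_formats; j++) {
			/* Each format is a DRM fourcc code, e.g. "XR24" for XRGB8888. */
			printf(" %.4s", (char *)&plane->formats[j]);
		}
		printf("\n");
		drmModeFreePlane(plane);
	}
	drmModeFreePlaneResources(res);
	return 0;
}
```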
07:04
So, you can see that I only have a limited number of Planes with a limited number of formats supported in each Plane. So, for instance, let's take an example with these three windows and the three Planes I have.
07:23
So, for instance, if this window is using ARGB, this window XRGB, and this window XRGB, then I can put each one into a plane, and everything's fine. But if the first window is using XRGB, then I can't do that, and I can't use planes.
07:41
So, planes come with a large number of constraints. So, we said number of planes, formats, but there are also some constraints on the buffer size, some constraints on the Z position, so which plane is over which other plane. So, for instance, on my machine, the first plane would be under the second plane,
08:01
which is under the last plane, and I can't change that. Some hardware is able to change the Z position. There are also some bandwidth constraints. So, if I use windows that are too large, I can't put them into planes. And a lot of weird stuff: for instance,
08:24
on my second plane, if I want to display a YUV buffer, the position must be even, I think, and if the position coordinates are odd, then it doesn't work.
08:42
So, that's Intel-specific, of course, and every vendor has some vendor-specific constraints like this. So, yeah, that's a mess. And the only way we know whether some combination will work or not is to perform what we call
09:01
an atomic test-only commit. So, we basically say, hey, I want to display this window on this plane, this other window on this other plane, and we ask the kernel, we ask KMS: will this work? And KMS says yes or no, but we don't know why.
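For reference, this is roughly what such a test-only commit looks like with libdrm; it's a sketch under the assumption that the plane, CRTC, frame buffer and property IDs have already been looked up (for instance with drmModeObjectGetProperties).

```c
#include <stdbool.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Ask KMS whether putting this buffer on this plane would work, without
 * changing anything on screen. The property IDs (FB_ID, CRTC_ID, ...) are
 * assumed to have been looked up beforehand. */
static bool test_plane_config(int drm_fd, uint32_t plane_id, uint32_t crtc_id,
		uint32_t fb_id, uint32_t fb_id_prop, uint32_t crtc_id_prop)
{
	drmModeAtomicReq *req = drmModeAtomicAlloc();
	if (req == NULL)
		return false;

	drmModeAtomicAddProperty(req, plane_id, fb_id_prop, fb_id);
	drmModeAtomicAddProperty(req, plane_id, crtc_id_prop, crtc_id);
	/* A real request would also set SRC_X/SRC_Y/SRC_W/SRC_H and
	 * CRTC_X/CRTC_Y/CRTC_W/CRTC_H here. */

	/* TEST_ONLY: the kernel checks the configuration but applies nothing. */
	int ret = drmModeAtomicCommit(drm_fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
	drmModeAtomicFree(req);

	/* true: it would work; false: it wouldn't, and we don't learn why. */
	return ret == 0;
}
```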
09:21
So, yeah, that's how we use planes, basically. So, now we can dive into what libliftoff is. So, the goal of libliftoff is to make it easier to use hardware planes. One goal is also to not abstract too much, to be as thin as possible, a thin layer of abstraction,
09:47
to not get in the way of the compositor. So, if the compositor wants to use special other features, we want to let the compositor
10:01
customize a lot how it uses planes. So, we had a workshop at XDC last year, and so this presentation is basically what we've done so far and what the next steps are. So, the basic idea behind libliftoff
10:23
is to expose some layers. So, layers are virtual Planes. They are the same as Planes. You can set the position, you can set the buffer, you can set a bunch of properties in them, but they don't have any constraint.
10:41
So, you have as many layers as you want, and you can set any property in them. It's fine. So, basically, compositors can use layers just like they would use Planes.
11:00
And then libliftoff will see which layers the compositor has set up, and then try to map them into planes. So, yeah, libliftoff performs a layer-to-plane mapping. So, let's see a very basic example.
11:23
So, it's a pretty simple API. So, for each GPU, you can create a libliftoff device, and then for each device, you have a bunch of outputs. You can create an output, a libliftoff output.
11:41
And then you can create as many layers as you want. Here, I create a layer. I set the frame buffer ID. I can set the position. I can set the scaling method and a bunch of stuff like this. So, these are just standard Plane properties.
12:01
Yeah, in a real-world example, we'd set up more layers like this. And then the compositor can just call this function, liftoff_output_apply. And this will fill an atomic commit with all the planes' state.
12:22
And the compositor can just perform the atomic commit, so send all the properties to the kernel to display a new frame, and that's all. So, all compositors have to do is set up a bunch of layers; the commit part was already there before.
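As a rough illustration of the flow just described, here is a sketch in C; the names follow the talk, but the exact libliftoff signatures may differ between versions, and drm_fd, crtc_id and fb_id are assumed to exist already.

```c
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <libliftoff.h>

/* Sketch: one frame of a compositor using libliftoff. */
static void draw_frame(int drm_fd, uint32_t crtc_id, uint32_t fb_id)
{
	struct liftoff_device *device = liftoff_device_create(drm_fd);
	struct liftoff_output *output = liftoff_output_create(device, crtc_id);

	/* One layer per window; a real compositor would create several. */
	struct liftoff_layer *layer = liftoff_layer_create(output);
	liftoff_layer_set_property(layer, "FB_ID", fb_id); /* the window's buffer */
	liftoff_layer_set_property(layer, "CRTC_X", 100);  /* position on screen */
	liftoff_layer_set_property(layer, "CRTC_Y", 100);
	/* ... CRTC_W/CRTC_H, SRC_* for scaling, and so on. */

	/* Let libliftoff pick a layer-to-plane mapping and fill the request. */
	drmModeAtomicReq *req = drmModeAtomicAlloc();
	liftoff_output_apply(output, req);

	/* The compositor still performs the atomic commit itself, as before. */
	drmModeAtomicCommit(drm_fd, req, DRM_MODE_ATOMIC_NONBLOCK, NULL);
	drmModeAtomicFree(req);
}
```

In a real compositor the device, output and layers would of course be created once and reused across frames.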
12:41
Like, this doesn't change. Compositors were already doing that before. So, one small problem with this is that sometimes a window cannot make it into a plane.
13:02
So, we've seen before that if this uses XRGB, then I need to fall back to composition. But I don't need to do that for, yeah. So, in this example, if these two can't be put into planes, we must fall back to composition to copy them into a large frame buffer,
13:21
like before with OpenGL. And if this makes it into a Plane, then we don't need to copy it. So, this is a mixed mode where we use Planes for some windows, but not for all of them.
13:40
So, the way it works is that, so, yeah, one important thing, before we see how we can manage this situation, is that libliftoff does not perform any composition. So, the compositor is still responsible for using OpenGL to copy window buffers into the larger buffer.
14:04
So, basically, compositors will have this larger frame buffer. They need to mark it to tell libliftoff: this is the composition layer. So, please, if you need to fall back to composition for some windows,
14:21
please use this layer, to indicate I need to perform some copies into it. So, there's a function for this, and then compositors need to, after calling the function that performs the mapping,
14:41
liftoff_output_apply, check each layer, and if a layer couldn't be put into a plane, then the compositor needs to copy it with OpenGL.
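A hedged sketch of that fallback path, continuing the previous snippet: liftoff_output_set_composition_layer and liftoff_layer_needs_composition follow current libliftoff naming but may differ between versions, and composition_layer, layers, n_layers and composite_window_with_gl are hypothetical names standing in for the compositor's own state and OpenGL copy.

```c
/* composition_layer wraps the big frame buffer the compositor renders into. */
liftoff_output_set_composition_layer(output, composition_layer);

drmModeAtomicReq *req = drmModeAtomicAlloc();
liftoff_output_apply(output, req);

/* After the mapping, check each window layer: whatever didn't make it into
 * a plane must be painted into the composition buffer by the compositor. */
for (size_t i = 0; i < n_layers; i++) {
	if (liftoff_layer_needs_composition(layers[i])) {
		composite_window_with_gl(layers[i]); /* hypothetical GL copy */
	}
}

drmModeAtomicCommit(drm_fd, req, DRM_MODE_ATOMIC_NONBLOCK, NULL);
drmModeAtomicFree(req);
```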
15:06
So, the current status is that the layer-to-plane mapping works. We have some support for collision detection. So, for instance, if you have two windows that don't collide, then you don't care whether one plane is on top of the other or not.
15:20
You can just put the two windows into planes, and you don't care about the relative position. We have support for basic incremental updates. So, if you only update the buffer property, and you don't update any other property, we can reuse the previous mapping we had; we don't need to recompute the whole thing.
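A tiny sketch of that fast path, reusing the names from the earlier snippets (new_fb_id is assumed to be the window's freshly attached buffer):

```c
/* Only the buffer changed: update FB_ID on the existing layer and apply
 * again, so libliftoff can reuse the previous layer-to-plane mapping
 * instead of searching from scratch. */
liftoff_layer_set_property(layer, "FB_ID", new_fb_id);

drmModeAtomicReq *req = drmModeAtomicAlloc();
liftoff_output_apply(output, req);
drmModeAtomicCommit(drm_fd, req, DRM_MODE_ATOMIC_NONBLOCK, NULL);
drmModeAtomicFree(req);
```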
15:43
And one important item is that we have some unit tests. I feel these unit tests are pretty important, because it's very easy to change a bit of the algorithm and then get it wrong, and it's very hard to debug.
16:03
So, we have a mock libdrm library, which fakes some hardware planes and checks that libliftoff does the right thing. So, I also focused on doing some field testing.
16:23
So, making sure that in the real world, it makes sense to do what libliftoff does. So, the first thing I've done is starting glider,
16:40
which is an experimental compositor using libliftoff. This is just to prove that the API is fine, and that with real clients, with real windows, it works. Also, I'm working on wlroots, which is a Wayland compositor library,
17:01
and I'm working on making it ready for libliftoff, so we need to add a bunch of new APIs and refactor some things. Valve is also working on a compositor called gamescope for the SteamOS distribution,
17:24
and it will use libliftoff as well. So, that's pretty cool, to have some other people trying it out. So, with glider, I've performed some very, very early benchmarks.
17:41
So, we can see here that, so, the compositor is the first item. This test was done using a very simple client that just draws a gradient in a 250 by 250 pixels buffer,
18:03
so a pretty small buffer. And we can see that here, the power estimate is like, yeah, it's pretty good. We use less power than before, so that's a good sign. But, yeah, again,
18:22
we need to take a step back when reading these numbers, because when using composition, there was no optimization like damage tracking or anything like this. So, when doing compositions, a whole screen buffer was copied each time, and when using planes,
18:42
there's absolutely no copy whatsoever. So, yeah, that's pretty good, but maybe real-world scenarios won't be as optimistic as this, and clients will probably use a lot more power, of course. So, yeah, what's next?
19:03
So, we have a bunch of short-term goals. The first one is to perform more benchmarks. One issue is that when libliftoff has a bunch of layers and tries to map as many layers as possible to planes,
19:22
the algorithm takes quite a long time, because we need to perform a lot of atomic commits. So, we need to take a layer, put it into a plane, and ask the kernel, will this work? If it does work, take the second layer, try to put it into a plane, ask the kernel again,
19:40
is this fine? So, this takes quite a long time, depending on the hardware configuration. So, I have some ideas to try to find the best layer-to-plane mapping a little faster. We need to test more hardware and see how it behaves,
20:02
but one important thing is that incremental updates save a lot of time. So, most of the time, you only change the buffer. You don't really change the position of the windows a lot. So, you just update the contents of the window.
20:24
So, incremental updates are very important to mitigate this. So, the goal here is to not miss a frame. We basically have a budget: if the screen refresh rate is 60 Hz,
20:45
then we have 16 milliseconds to do everything, to draw a new frame. So, yeah, we need to be fast. We also need to better support multi...
21:01
Oh, yeah, so layer priority is an important thing. So, if a window updates a lot, so, for instance, if you have a video player, each frame will be a new one, and you will never basically reuse the previous frame, then you really want to put it into a plane
21:22
to avoid having to copy the thing. So, you want to, yeah, you want to try to see which layers are updating more often than the others and put them into planes in priority.
21:42
I'd also like to have better multi-output support, because right now, the first output that comes will take all the planes it can take. And if you have another output, then it won't have as many planes. So, we need to be fair when splitting planes across outputs.
22:03
And also, we need a way to migrate planes. So, for instance, if I use 10 planes on one output, and a new output comes in, I need to migrate planes from the first one to the other one. And it's a little bit tricky, because sometimes the outputs
22:22
are not refreshed at the same time. So, for instance, if both outputs have different refresh rates, yeah, there are some synchronization issues here. I also have a bunch of long-term goals.
22:42
So, the first one is to have a feedback loop. So, basically, the idea is that right now, clients allocate buffers, send them to the compositor, and the compositor needs to do the copy. And so, the client decides what the buffer format is. So, here, the client decided XRGB.
23:01
But then, the compositor cannot put it into a plane. So, the idea of the feedback loop is to not just be sad and say, okay, I can't put the window into a plane, life is terrible. The idea is to add a way for the compositor
23:20
to say to the client, I can't put this format into a plane, but if you allocate using this different format, ARGB, then life will be better. And so, the client can do it. So, we need a little protocol for this; I'm working on Wayland, so I'm working on a Wayland protocol to do this.
23:42
Some kind of feedback loop to say, okay, use ARGB, and the client says, okay, I'm using ARGB now. So, yeah. This is a work in progress. It's called dma-buf hints.
24:00
One of our long-term goals is to have driver-specific plugins. So, as I said, right now, we only have the atomic test-only API to know if a plane combination will work. So, the idea would be to have some driver-specific plugins inside libliftoff.
24:23
So, we could have logic to say, okay, on Intel, I know that this won't work, so I won't even try it, and things like this. And I know that the bandwidth limitation is this limit, so I won't try to go over it. So, this will allow us to be more clever
24:41
when doing the plane mapping. And the last item, but yeah, it's for the future, is Exotic Configuration. Sometimes you have planes which are under the composition layer, so you need to draw a hole into the composition plane to be able to show some planes under it.
25:03
So, it's very, very, yeah, tricky. We'll see if it's worth it. So, there are a bunch of references here, if you're interested, and thank you for your attention.
25:20
Yeah, feel free to ask questions. Yeah, go on, David. How do you know which window to put into libliftoff? Is there anything in there, or just the ones
25:43
where you don't care, like games and video players? So, can you repeat the question? Sorry, how do you know which surfaces you should put into libliftoff? Do you put in every window that you have, or just the ones that update at a high frequency? Okay, so I'll repeat the question for people on the stream.
26:01
So, the question is, so this is a question from the compositor point of view. So, as a compositor, do I put every window into a liftoff layer, or do I do something else? So, yes, the answer is basically: you take every client window, you create a layer,
26:22
and then, if it can be put into a plane, libliftoff will tell you, okay, you can put it into a plane, and if it can't, then you need to fall back to composition as you were doing before. Yeah? I was wondering, is it typical for graphics cards to have only three planes, or something more, or?
26:42
Okay, so the question is basically: how many planes do we have, generally? So, my Sandy Bridge laptop is pretty old and only has three planes, so that's not a lot. I know that newer Intel hardware has a lot more planes,
27:01
I think like seven planes or something, and some ARM GPUs basically don't have a limit, don't have a maximum number of planes. You can have as many planes as you want, but you have some bandwidth limits. So, they expose 10 planes, and you can try to use them, but at some point, it won't work anymore.
27:22
So, yeah, the trend is basically, we'll get more and more planes as we keep going. So, that's pretty good news. Yep, yeah, so we've talked also about
27:48
a libliftoff ioctl. So, basically, you set up your layers and send all your layers to the kernel, and then ask the kernel: will this work, and do I need to composite anything or not?
28:03
Yeah, there's been some talk about this. I think it's important here just to start with this, and then if it works, then we can discuss about maybe putting it into the kernel, or maybe having device-specific plugins. We'll discuss with the kernel developers
28:21
what makes most sense, but for user space, they could just keep using this API and it won't change. It's basically an implementation detail, so we can change how it works behind the scenes, and it won't matter a lot for user space.
28:44
Yeah, the idea here was just to use the current API as it was designed, and then, depending on the result of the experiment, will we continue using this API or not? We'll see.
29:01
Yep. So Xorg, yeah. I don't think Xorg is using hardware planes, is it? All the Xvideo stuff from the 2000s.
29:23
Okay, so. So it was able to, yeah. But so right now, if I have just an Xorg session, will it use planes or not? Yeah, yeah, okay. So yeah, okay, so legacy. So there's some legacy. It was able to use planes at some point,
29:41
but the new stuff is not able to. All we know devices, 74.
30:01
74, are you kidding me? For each pipeline, okay, all right.
30:22
So Ice Lake, eight planes. So eight planes per pipe, right? So per output, okay. So 32 planes total, distributed across. I'm saying like eight times four. Okay, okay, okay, okay.
30:41
So I think for most desktop applications, like, eight planes is pretty good already. You don't need a lot more. So we can do like 10 more minutes of question time, because Eric is a bit late, he's stuck on the bus. Okay. So we can go on with 10 more minutes. Feel free to discuss.
31:02
Yes. So, what questions? Transforms, properties? So some planes sometimes can or can't scale. They can or can't rotate. They can only rotate in specific formats and separate segments. Yeah. Are we adding that as well? Or is that already there? So the way you do scaling is basically
31:23
setting some properties on the planes. And on layers, you can set any property. So basically you can set the scaling properties and it'll just work. It will just try, on each plane, to use it, and maybe it works, maybe not.
31:41
If it doesn't work, it will ask you to fall back to composition. So everything, so you can set any property. So all plane features should just work, basically. But would you optimistically set every property and then it could fail? Or would you, like, roll back? So you can set a property that doesn't exist, and then it will always fail.
32:02
But then you don't have feedback on which property. So, like, if you have Y position, or Z order for example, which you expect to work. But then you want to go more, like, transform, which is less likely to work. And you have maybe 10 properties, of which four of them you think are gonna work
32:22
and six of them, some subset of them, might work. Is there a way? So basically the idea is that when you want to do a transform, if you can't do the transform, in any case you'll fall back to composition, right? Right, but you don't want to. So what I'm saying is that falling back to composition would be sub-optimal. There's a more optimal solution
32:41
that doesn't include transform. Like maybe only one of your windows is rotated. So what's the situation? Can you describe the situation? So you have the terminal here and the calculator here, and the calculator
33:01
is just rotated. You have one rotated client and one non-rotated client. And ideally you can put them all in planes and then let the GPU rotate that one client for you. So you might specify with the libliftoff layer: I have this one that I want to rotate, this one I don't want to rotate. Yeah, yeah, sure. And maybe we extrapolate this for another five properties that are maybe some kind of exotic,
33:21
like this one is YUV and this one is something else. And it fails because the GPU doesn't support rotation. Okay. Does the compositor expect to just peel off properties one at a time and keep trying, so it takes off the rotation and tries to get something else? So if you take out the rotation, then your final image won't be the same, right? But then the compositor could just rotate it in software
33:42
and still use the plane for the other client that doesn't need to be rotated. So, yeah, yeah. So basically that's what happens. So if the rotated client can't be put into a plane, it goes to composition. So it will ask you to composite it, but the other client can be put into a plane, and that's fine.
34:01
Are there plans for multiple composition layers? So like you could composite onto a plane and then composite onto the background? Yeah, so that's more complicated. It's planned, but we'll see for the future. Just doing this will already be great, so we'll see. Yeah, having multiple composition layers is like, yeah.
34:24
More questions? Yeah, go on please. So, the criteria for allocating the layers to planes are very dependent on a lot of things, as I understand.
34:42
Yep. I expect it might very well be that you have to frequently reschedule things, as if you move windows and, for example, one coordinate that was even becomes odd. And I don't know what happens. So you probably have to change your allocation.
35:01
And maybe at some point you didn't expect it, at some point you could avoid composition, and immediately you have to start using composition. I expect that this could be a problem if your composition path is cold and its caches haven't been used recently; it could be quite slow the first time you use it.
35:23
Because you need to, like, boot up the render engine. Yeah, I don't know if it takes time; we need to do more experiments. Yeah, I'm not sure. We need to ask the device, the kernel driver guys, to see if that would be an issue.
35:47
Yeah, you basically don't want to use discrete GPUs for composition anyway, because if you unplug it, then you don't have any.
36:00
Yeah, there's a lot of complicated issues with the plane, layer to plane mapping algorithm. It's pretty annoying to, yeah.
36:21
Yeah, so in any case, if something too hairy is going on, then it will fall back to composition anyway, so yeah.
36:43
Yeah, I'm not a fan of like, letting the compositor say, okay, the guy is moving the mouse and other things, so I'm gonna do some heuristics too. Yeah, we need to do some experiments to see if it's worth it or not. Yeah, sure.
37:13
So, partial updates, and windows being occluded behind other windows. So, partial updates should, so for planes,
37:22
you don't really need to say to the GPU whether only a region, oh, actually, there is a property for this, yes. So, yeah, actually, you can set up a layer
37:43
and say, I only updated this region of the layer, and then some hardware will be able to make use of this. So, the compositor, even without libliftoff, already does damage tracking, so it keeps track of which parts of the screen
38:01
have been updated since the last frame, and then only repaints the parts that need to be repainted. So, it could just set the property on the layer, and it would be fine. And right now, I don't have any logic to not show layers that are completely occluded
38:22
behind other layers, but it's a little difficult sometimes because if you have transparency, for instance, you still want to draw the layer even if it's completely occluded behind. So, the compositor will have more hints. Some clients can say, my window is completely opaque,
38:43
so it's fine if you don't do anything under it. So, compositors have more knowledge to do this, and should already do this for OpenGL. I'll see if, yeah, I don't know if it's worth it to add this to libliftoff. Maybe in the future, we'll see.
39:01
Depending on the buffer format, if it's xRGB, for instance, we know it won't have an alpha channel, so we can say it's opaque and don't do anything under it. Yeah, maybe, we'll see.
39:27
Okay, yeah, sure.