Movit: High-speed, high-quality video filters on the GPU
Formal Metadata

Title: Movit: High-speed, high-quality video filters on the GPU
Series: FOSDEM 2014, talk 153 of 199
License: CC Attribution 2.0 Belgium. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
DOI: 10.5446/32578
Transcript: English (auto-generated)
00:00
the way back after the second lunch break. My name is Steinar. I will be talking about Movit, which is my newest hobby project, more or less. And as you can see, it has this sort of pun in it, but it stands for the Modern Video Toolkit. And this is, of course, dangerous, because anything called modern usually isn't, and anything called toolkit is usually hideous.
00:23
But it tries to be a high-performance, high-quality open source library for video filters. Let me actually talk a bit first about what I mean by high performance. To do that, we'll sort of go back 25 years in time. This is ANSI C. I'm sorry if it's a bit small, because there will be more text. But this is actually 25 years ago.
00:41
ANSI C has held up remarkably well. But this is how you did a fade if you were programming in 1989. I've stripped out all the complexity: this is only grayscale, there's no alpha, there's nothing. You take in an image A and an image B, and you want to fade between them. You multiply one by some fade constant F, you multiply the other by one minus F, and then you add them together. It's plain and simple.
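A minimal sketch of the kind of code meant here (grayscale, 8-bit; the names are illustrative, not the actual slide):

    /* Naive floating-point fade: out = a*f + b*(1-f). */
    void fade(const unsigned char *a, const unsigned char *b,
              unsigned char *out, int num_pixels, float f)
    {
        for (int i = 0; i < num_pixels; ++i) {
            out[i] = (unsigned char)(a[i] * f + b[i] * (1.0f - f) + 0.5f);
        }
    }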
01:00
Then someone figures out, oh, maybe this is a bit slow, so we rearrange it a bit. We move things around so we only have one multiply; this is also a standard trick. And then people come along and say it's still too slow, and the reason it's slow is that we're using floating point. And it's not really that floating point itself is slow.
01:20
Floating point has been really, really fast for most of, at least, the last 20 years. The thing that's slow is moving things back and forth between ints and floats. So we do fixed point. The multiplies get a bit more tricky; we need to round a bit, we need to shift a bit. But it's still the same thing.
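Both tricks together, sketched: rearranged to a single multiply, and in fixed point, with f now an integer weight in [0, 256]:

    /* One multiply, integers only: out = b + (a-b)*f/256, with rounding.
       (Assumes arithmetic right shift of negative values, as on all
       mainstream compilers.) */
    void fade_fixed(const unsigned char *a, const unsigned char *b,
                    unsigned char *out, int num_pixels, int f)
    {
        for (int i = 0; i < num_pixels; ++i) {
            out[i] = (unsigned char)(b[i] + (((a[i] - b[i]) * f + 128) >> 8));
        }
    }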
01:43
Now we come to MMX. In 1997, around then, Intel figured out that people want to do the same thing to many different values. We have all these pixels here, and all these pixels over there; let's do them four at a time. The problem, of course, is that it becomes very, very complicated. I wrote this, and I needed at least half an hour to get it to work at all.
02:01
It's probably not very efficient. It's even worse if you want to write pure assembler. And before we talk about auto-vectorization: I can tell you no compiler in the world will auto-vectorize the previous code, this one, because we reduced precision. And in general, auto-vectorization is not something you can really rely on.
02:20
So you need to write something like this. And of course, right at the bottom is still our old friend, the scalar version, as a fallback for the last few pixels. And then we invent not only MMX, but SSE, SSE2, AVX, AVX2; I hear AVX-512 is now coming. And you need to write code for every single one of these.
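To give a flavor of what the vectorized version involves; the slide used MMX, but this sketch uses the equivalent SSE2 intrinsics, eight pixels per iteration:

    #include <emmintrin.h>  /* SSE2 */

    void fade_sse2(const unsigned char *a, const unsigned char *b,
                   unsigned char *out, int num_pixels, int f)  /* f in [0, 256] */
    {
        const __m128i vf = _mm_set1_epi16((short)f);
        const __m128i vg = _mm_set1_epi16((short)(256 - f));
        const __m128i round = _mm_set1_epi16(128);
        const __m128i zero = _mm_setzero_si128();
        int i = 0;
        for (; i + 8 <= num_pixels; i += 8) {
            /* Widen eight 8-bit pixels from each image to 16-bit lanes. */
            __m128i va = _mm_unpacklo_epi8(
                _mm_loadl_epi64((const __m128i *)(a + i)), zero);
            __m128i vb = _mm_unpacklo_epi8(
                _mm_loadl_epi64((const __m128i *)(b + i)), zero);
            /* (a*f + b*(256-f) + 128) >> 8 per lane; the sums stay below 2^16. */
            __m128i sum = _mm_add_epi16(_mm_mullo_epi16(va, vf),
                                        _mm_mullo_epi16(vb, vg));
            sum = _mm_srli_epi16(_mm_add_epi16(sum, round), 8);
            /* Narrow back to 8 bits and store. */
            _mm_storel_epi64((__m128i *)(out + i), _mm_packus_epi16(sum, sum));
        }
        /* Our old friend, the scalar fallback, for the last few pixels. */
        for (; i < num_pixels; ++i)
            out[i] = (unsigned char)((a[i] * f + b[i] * (256 - f) + 128) >> 8);
    }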
02:40
And then Intel comes along and says: hey, we have multiple cores; we cannot make a single core faster anymore; we need you to split your work. OK, so you need to do pthreads, whatever, fire up threads. And if one of them is slow, you will have a problem. You need to distribute the work somehow, more intelligently than I have done. And this is, of course, on top of all the SSE2, AVX,
03:01
whatever, whatever, whatever. OK, so we sort of figured this out 10 years ago, right? We have multi-core; we've figured out how to do this. Let me now show you how people actually do this. This is 2006. This is from a project called frei0r. If you have ever edited a video on your Linux machine, you are most likely fading using this software, or
03:22
something very much like it. It is still, as you can see, completely scalar. There's no AVX. It does not support multi-core. So what the heck? It turns out this model of doing things efficiently on the CPU is very, very complicated.
03:41
So people don't do it. At least we don't in the free software world; maybe it happens in the Windows world, but this is not our reality. So fast forward to 2014, and GPUs. You cannot really buy a desktop machine anymore without a GPU. And if you're lucky, you get a cat.
04:00
Actually, you cannot buy a laptop anymore without a GPU, and maybe a cat. You can't buy a cell phone anymore without a reasonably powerful GPU. And this is a bottle deposit machine; it has a GPU. So they are no longer uncommon. We should no longer assume the user does not have a GPU.
04:24
So how do you program it? This is GLSL, the most common shading language you would use to program your GPU. And these are the four lines you need to do everything the CPU version did, with full vectorization, full splitting, full handling of all the edge cases, yada, yada, yada.
04:41
And Movit uses GLSL. If you write this in Movit, you need only a little more glue around it, basically because Movit is able to chain many of these things together, so it needs some namespacing and things like that. It is refreshingly simple.
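Roughly what this looks like. The GLSL is held in C++ strings here; FUNCNAME, INPUT1/INPUT2 and PREFIX are Movit's splicing macros, while the uniform names are made up for illustration:

    // The whole fade as a plain GLSL fragment shader:
    const char *fade_glsl = R"glsl(
        varying vec2 tc;
        uniform sampler2D tex_a, tex_b;
        uniform float fade;
        void main() {
            gl_FragColor = mix(texture2D(tex_b, tc), texture2D(tex_a, tc), fade);
        }
    )glsl";

    // The same as a Movit effect fragment: Movit gives each effect's
    // FUNCNAME a unique name and points INPUT1/INPUT2 at the previous
    // stages, which is how it chains many effects into one shader.
    const char *movit_fade = R"glsl(
        vec4 FUNCNAME(vec2 tc) {
            return mix(INPUT2(tc), INPUT1(tc), PREFIX(fade));
        }
    )glsl";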
05:01
And I think this is maybe the thing about using GPUs for anything like this: it's not necessarily that it's so much faster by default. Well, it depends a lot on your GPU. I'm pretty sure that on this laptop, I could probably beat the GPU for most of these things if I really, really wanted to. But it's so simple to get high performance. It's refreshing. So let's see what we can do with this.
05:22
And when I say high quality: now, I'll try not to diss a lot of people too much, but this is one of the wrong things you can do. I mean, this is a graphics devroom, right? I'm sure everyone knows this, but I will say it nevertheless. The pixel values that we store in our images are not linear amounts of light.
05:41
They are in a gamma-compressed format, which means they have some sort of curve on them. And if you do any sort of arithmetic on them, you need to convert them to the real values first. If you don't do this, you end up with the situation here on the right. I've faded two images 50-50 together, one dark and one light. And you can especially observe that the wrong things come
06:03
through: this bright spot here is way too visible, these edges here are just way too visible. It isn't right. The left one is how it's supposed to look. This is how it would look if you took physical grains of film and put them on top of each other. So this is one wrong thing you can do: not converting to linear first.
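For reference, the conversions that have to wrap the blending arithmetic; these are the standard sRGB formulas:

    #include <cmath>

    /* sRGB-encoded value in [0,1] -> linear light. */
    float srgb_to_linear(float c)
    {
        return (c <= 0.04045f) ? c / 12.92f
                               : std::pow((c + 0.055f) / 1.055f, 2.4f);
    }

    /* Linear light in [0,1] -> sRGB-encoded value. */
    float linear_to_srgb(float c)
    {
        return (c <= 0.0031308f) ? c * 12.92f
                                 : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
    }

    /* A correct fade converts, blends in linear light, and converts back:
       out = linear_to_srgb(srgb_to_linear(a) * f + srgb_to_linear(b) * (1 - f)) */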
06:20
And of course, the example code that I showed does not do this correctly, because it's just a simple thing. You can also use wrong color spaces. Now, I never trust beamers, right? So I've exaggerated the effect here; I've applied it three times. But we talk about RGB, and we never really say what red, what green, what blue.
06:40
And it turns out there are a number of different interpretations of this. And if you don't use the right one, your image on the left can end up like the image on the right with wrong colors in some way. So you really need to care about this sort of thing. This is not that visible, so I've zoomed it in.
07:01
This is what happens if you put your chroma planes in the wrong position. There are a lot of different standards for where they should sit. If you put them wrong, you get these color fringes around here. So you really need to get this right. And before you ask: a lot of current Linux video software does this wrong. The effects are subtle, but you will see them if you
07:22
know where to look. And finally, when I talk about quality, I mean more than just image output quality. Quality is not a feature in itself; it really is a process, a commitment to how you want to do things. Movit has unit tests, and they are the kind of tests where you can actually say: if the tests pass on your machine, most
07:42
likely everything will work. And I really try to take this seriously: you want to make something that works and doesn't give people the wrong idea of how to use it. You really want to convince yourself that you're not doing the wrong things. A lot of these things seem to be really simple.
08:01
They're like: just don't make mistakes. But they are surprisingly hard. It's so easy to get a one-pixel placement wrong. Do I add one? Do I subtract one? Do I divide by two? Do I multiply by two? My education is in multimedia DSP, and I still get these things wrong, right? So I try to have tests. And every time I write a test, I feel like this is a way too stupid test.
08:20
This cannot possibly fail, and then it fails. You might be perfect programmers; I'm not. So again: 90% code coverage. So let's see how it is to use. Movit uses chains of effects. I usually say that if your hello world is more than five lines long, you've failed.
08:41
So in a sense, I've failed. Well, I'll walk you through a bit of what you do. First, you define an input. You say that it's in some color space and some gamma. You define, well, OK, the byte order is BGRA, it's unsigned bytes, all this. You add it. You add an effect, in this case a glow effect. You add an output of some format and set how many output bits you want.
09:01
And you finalize. Now the chain is locked; it gets compiled and uploaded to your GPU, and you can run it very fast, as many times as you want. You just say: here are some new pixels, render to the screen. This is how your main loop typically looks.
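A sketch of what that hello world looks like against Movit's API; the class and constant names are as I recall them from Movit's headers (effect_chain.h, flat_input.h), so treat the exact signatures as approximate, and get_next_frame() is a hypothetical frame source:

    #include <movit/effect_chain.h>
    #include <movit/flat_input.h>
    #include <movit/glow_effect.h>

    extern const unsigned char *get_next_frame();  // hypothetical

    void run(unsigned width, unsigned height)
    {
        movit::EffectChain chain(width, height);

        movit::ImageFormat format;
        format.color_space = movit::COLORSPACE_sRGB;
        format.gamma_curve = movit::GAMMA_sRGB;

        // BGRA byte order, unsigned bytes, as described above.
        movit::FlatInput *input = new movit::FlatInput(
            format, movit::FORMAT_BGRA_POSTMULTIPLIED_ALPHA,
            GL_UNSIGNED_BYTE, width, height);
        chain.add_input(input);
        chain.add_effect(new movit::GlowEffect());
        chain.add_output(format, movit::OUTPUT_ALPHA_FORMAT_POSTMULTIPLIED);
        chain.finalize();  // the chain is now locked, compiled, on the GPU

        for (;;) {
            input->set_pixel_data(get_next_frame());
            chain.render_to_screen();
        }
    }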
09:21
Doing this will create this chain here internally; this is a debug dump of how Movit works. You have an input, you have a glow effect, and you have an output. Then, internally, the first thing Movit does is rewrite this into some big hideous monster. It basically says: well, the glow effect, that's sort of, I want to cut out the highlights, I want to blur it horizontally, I want to blur it vertically, and I want to add it back to the first thing.
09:40
And then this is, of course, way, way, way too small to read; I'm not expecting you to read this. But after like 19 passes of this mini-compiler thing, it has inserted all the gamma conversions that you need. It has split things up into phases; you can see the different colors here. These are different phases, because sometimes it's advantageous not to do everything chained. And it has eventually created programs for you.
10:02
And all that will turn this image here into this image here. More effects. I don't have a lot of effects in Movit, and the reason is I would rather have good effects than many effects. There are only like 10 or 12 currently; there will be more. But each of them is fast.
10:21
They work. They don't give you any surprises. They look good. Vignette: plain and simple, just darken the corners a bit through some cosine-squared law. This is Sharpen. This is not an unsharp mask; this is actually a deconvolution that tries to invert the transfer function of the lens.
10:41
So you don't get the halo effect that you otherwise would. I don't think you can see it too well on the slide, unfortunately, but usually it's better than an unsharp mask. Saturation, desaturation. Observe again that since we are operating in linear space, which we should do, the image as a whole does not get darker or lighter.
11:00
It just becomes more or less desaturated. And finally, this isn't actually in yet, but this is LensBlur through FFT. It actually does a full convolve with whatever you want. And of course, when you have convolve, you can also go crazy and convolve with a star or whatever you want to. If you want a hexagonal blur, then fine by me.
11:25
So demo time. And I really hope that my demo actually works in this resolution. It does. This is some sort of, I mean, these widgets here are just something I threw together, right? They're not meant to be a UI thing. But they're like the standard color correction tools that
11:43
you can find in almost anything except Linux. So you can sort of drag your mid-tones around, drag your highlights around, and make it as ugly as you want to. But of course, a demo app is boring, right?
12:03
This is Kdenlive. This is, at least if you ask me, the only usable video editor on Linux right now. I hope there are no Blender people here to take offense. And this is my early-alpha hacked version that actually uses Movit through a project called MLT.
12:22
Of course, I mean, I'm pulling down a clip here. Right now, Movit isn't really doing a lot, right? It's only doing color space conversion and things like that. But I can pull out an effect here. Let's say I want to blur it, and I can blur more or less. And you can see, this happens at 60 FPS.
12:42
This is 720p. It's a small window, but it's not cheating. It's actually editing in 720p and just scaling down for display. If I'm bored with the blur, of course, I can do some sort of fade, which will work properly. I can, of course, also do, let's see, white balance.
13:05
I just need to find some gray point in this clip here. Let's just focus on here. This is gray. And now gray actually stays gray. There are multiple Linux white balance things where gray does not become gray because no one ever wrote a test for it, and it never worked.
13:21
I'm not kidding you. This is frustrating. You can change the color temperature. It's like super, super warm, right? It still doesn't break in any way. It doesn't clip, again, because we're working in linear space. And you can now sort of play back and forth. Oh, I think I forgot to set the reverse flag on the clip, so it's fading the wrong way, but that's just a UI issue, right?
13:42
So again, this is, I think, well, in the fade, I don't think I can do 60 FPS. I think this is 30 FPS because this is a four-year-old ultraportable laptop. This is not something high-end in any way. And I think that's mostly because we're decoding the video on the CPU.
14:00
So really, if I wanted to do this in the current software world, this would be five FPS, maybe. Now it's 60. And this also means, of course, that if you have a much better machine, like say any mid-range Nvidia card, you can probably go 4K video. Directly on the GPU, no tricks.
14:21
Just do things, move around, do whatever you want. I think this was the demo. Yes, future work is always the best part, of course. There will be more filters, but they will be added again slowly. Because I don't want a lot of filters that don't work.
14:41
There are some limitations in the YCbCr handling, because it's actually surprisingly subtle to get all the scaling and all the interpolation correct. I want better interpolation. Interlaced content? How many people here like interlaced? No hands, right? And unfortunately, of course, interlaced video is a
15:01
reality many of us still have to deal with. And even though it's sort of the modern video toolkit, I think I might actually have to succumb a bit here and support it eventually. And then, of course, whatever clients need. Because Movit is not a program. Movit is currently used by MLT, which is this generic
15:20
video library thing that can do all sorts of things, which is in turn used by Shotcut, Kdenlive, Flowblade, and probably a few others. So again, whatever clients come to me and say they need, right, this is the kind of thing that obviously I want to support.
15:42
So, you will find that these Kdenlive patches had not been released until now; I just uploaded them like an hour ago. So if you're really, really brave and happen to have a machine that's sort of like mine, I can promise you it has stopped hanging the input drivers in the kernel. It used to. I think it might actually not segfault on
16:00
NVIDIA all the time. So if you're brave, you can go and get the library there, and you can get the Kdenlive patches there. And edit, and report tons of bugs to me. So, questions? Please put your hand up for the mic. I have two questions for you. The first one is: do you aim at providing a plugin for GStreamer? Because it's a very useful tool for building
16:23
pipelines of content. A plugin for? GStreamer. GStreamer. Not immediately. I would be very, very happy if someone wanted to add Movit support to GStreamer. I don't, unfortunately, know GStreamer very well, per se. One challenge here, though, is that for best possible
16:43
Movit performance, you really, really want to be able to chain things together. So if you have three successive Movit filters, you want them as one GStreamer component, not as three different ones. Because then you have to go all the way up to the GPU, do something, go back down to memory, down to the CPU, and up again, which really, really hurts performance.
17:01
A lot of the work that's gone into this is about keeping things on the GPU and chaining them efficiently. But if you want to do GStreamer support and know GStreamer much better than I do, then I'll be happy to talk to you. And my other point is: as a user, I'm looking for a tool that uses the GPU to do transcoding of a video format.
17:22
Are you aware of some? Or maybe your project, sometime, I don't know. So you're talking about using the GPU to transcode, right? There are sort of two sides here. There's video decode, which is pretty well supported now, right? At least if you have NVIDIA, you can do VDPAU. My Kdenlive patch doesn't support it right now, but it can certainly be done.
17:41
You can play video, right? Now, encoding, the other way, is much, much more tricky. That's a very involved thing. You will be aware that Intel has something called Quick Sync, which essentially does a live H.264 encode. There are some commercial GPU video encoders; I don't think there are any Linux ones.
18:02
Basically, this is because it's a very, very complicated thing, right? I hear there's talk of x264 using OpenCL to try to accelerate. It already does it. Okay, someone says it already does; it crashes a lot, it's very bad, it works very minimally. Okay, just to repeat for the stream: the message here is that it only does it
18:21
for lookahead, and it's not really done yet, right? Oh, they've even given up, okay. So I think the answer to the latter one is no; I don't think there are a lot of good options right now. The good thing, of course, is that once your GPU is doing all the pixel pushing, your CPU is free to handle the other work.
18:43
Yes, another question in the back. Yeah, how does the chaining work? Are you really chaining the effects within one program, one shader, or do you chain them by rendering to a target and texturing from that? They are mainly chained within the program;
19:01
most of it is run within one program. There are a few cases where you need to split the chain. One of them is when you do something like a blur, where you sample like 15 different pixels; there you want to sample from memory, because you don't want every single computation to be run 15 times, right? There are a few other cases where you need to bounce to memory,
19:21
and that is especially when you have a split in your chain, where the output needs to be used by two different things. So again, I'll just go back to one of the earlier slides here. It's not such an early slide, actually. Here, right: if you look at the colors here, you can see the first three are green. Those are part of the same shader,
19:40
and that's the input, the gamma expansion, and actually some alpha handling, because we handle alpha correctly as well. So in general, we try to combine them into one shader as much as we can, and we really rely on the shader compiler to optimize and inline things away as much as it can. I would love for the Intel shader compiler to be smarter about some of the things I hope it could be smarter about, but I assume there are people in the room here who can help me with that.
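To make the single-shader chaining concrete, here is a rough illustration of what a compiled phase conceptually looks like after Movit's rewriting: each effect has become a uniquely named function calling the previous one, all inside one fragment shader. All the concrete names here are made up:

    const char *compiled_phase = R"glsl(
        varying vec2 tc;
        uniform sampler2D input_tex;
        uniform float saturation_2_sat;

        vec4 input_0(vec2 tc) { return texture2D(input_tex, tc); }

        vec4 gamma_expansion_1(vec2 tc) {
            vec4 c = input_0(tc);
            return vec4(pow(c.rgb, vec3(2.2)), c.a);  // simplified expansion
        }

        vec4 saturation_2(vec2 tc) {
            vec4 c = gamma_expansion_1(tc);
            float gray = dot(c.rgb, vec3(0.2126, 0.7152, 0.0722));
            return vec4(mix(vec3(gray), c.rgb, saturation_2_sat), c.a);
        }

        void main() { gl_FragColor = saturation_2(tc); }
    )glsl";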
20:21
There are more questions, both in the front and the back. The good thing about not controlling the microphone is that I don't have to make these decisions. I'll be coming, don't worry. I've got two questions. The first is: do you support 10- and 12-bit video? Yes. Well, let me elaborate a bit on this. Movit mostly uses 16-bit floating point internally. An fp16 value has an 11-bit significand, so this is not enough for doing 12-bit video with full accuracy, but it is enough for doing 10-bit. We have accuracy tests for most of the crucial effects
20:42
to check that we actually land within the right pixel level. So yes, it is supported. For 12-bit, we would have to go to 32-bit floating point. This is not hard in itself, but I think open source things don't really support it all that much yet. It's certainly something I would love to support,
21:01
and we're very close to it, because we're working in full floating point all the time; we don't have these eight-bit bottlenecks. The other question was: if you do video decode on the GPU, is it a zero-copy operation, or is there a trip to the CPU and back? No, no, no. If you do video decoding on the GPU, you can certainly put that into a texture
21:21
with whatever extension you have available, and then we can use it as an input. Again, Kdenlive doesn't currently do this, but it would certainly be possible to go all the way without ever touching the CPU. There is a VDPAU extension for that, the VDPAU/OpenGL interop. Yes, and there's also, of course, now a VDPAU emulator on top of VA-API, so you can run VDPAU things
21:42
against your Intel card. I've never tested it, but you want something like this, right? If you look at the demo here now, when it's doing a fade, it's CPU-bound. It's my CPU that's not able to decode more than two streams at 40-something FPS. So first, some comments on the previous question
22:00
about GStreamer: I think this would be really fun to implement, and maybe I'm going to look at that really soon. It was very fun in MLT. And then, on Intel, you have with VA-API the possibility to do the decoding on the GPU and then get textures from that, and in that case you should also be able to have zero-copy processing directly here.
22:23
But now to my question: what are the inputs and outputs that you can have with your toolkit? Can you have GL textures as input and output, and can you also have system memory as input and output? I can tell you. So Movit is plugin-based, in a sense, right? You have a class that defines an input and an output,
22:42
and you have effects, of course, in the middle. There is an input class that takes in system memory and then, of course, uploads it to a texture and uses it from there. I don't actually think I have texture input, but it's trivial to add; it's like 10 lines of support, not a problem at all. As for rendering, Movit renders to whatever FBO you give it.
23:02
So you can either render to the screen, that's FBO zero, or you can render to another FBO and then do a glReadPixels down to the CPU. So input and output there are sort of free.
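A sketch of that readback path; the FBO setup is plain OpenGL, and render_to_fbo is Movit's entry point as I recall it, so check effect_chain.h for the exact signature:

    #include <epoxy/gl.h>  // or whichever GL loader you use
    #include <movit/effect_chain.h>
    #include <vector>

    void render_and_read_back(movit::EffectChain &chain,
                              unsigned width, unsigned height)
    {
        // Create a texture-backed FBO to render into.
        GLuint tex, fbo;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);

        chain.render_to_fbo(fbo, width, height);  // FBO 0 would be the screen

        // Read the result back down to the CPU.
        std::vector<unsigned char> pixels(width * height * 4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE,
                     pixels.data());
    }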
23:22
Okay, thank you. Any more questions? We have like 40 minutes to fill or something, right? Yeah, I have to fill it, right? There's now a question down here. What are your software requirements to make Movit work?
23:41
You spoke about VDPAU, or...? No, no, no. So my software requirements are, in a sense, very low. You need a C++ compiler, you need the Eigen matrix library, and you need OpenGL. And by OpenGL, I mean any card that can do shaders, which means any card from, like, eight or nine years ago. So there's nothing exotic;
24:01
it certainly works on both Intel and NVIDIA, for instance. I know people have it working on Linux, Windows, and OS X. Again, it's 2014, it should be allowed to use C++, right? But this is sort of the hardware requirement, in a sense. What about other video editors, such as OpenShot?
24:22
Are they likely to include that? OpenShot uses MLT, I think. Yes. So it means that soon, all you need to do to use this through MLT is to have an OpenGL context. You just need to initialize OpenGL so it can play with it. Apart from that, it's completely transparent to you. MLT now handles all the uploading and downloading
24:42
transparently and chains things together. This was done by Dan Dennedy, who is one of the MLT guys. I don't know if he might be here or not; I don't think so. But in general, it's easy to get started with. If you want to play with it, you can go to Shotcut, which also has full native support and has had for a while. Thanks so much.
25:09
Again, any more questions? Okay, another one. You said you need OpenGL with shaders. What about OpenGL ES, OpenGL ES2 or something?
25:23
This is a good question. I don't support GL ES right now. I'm not sure if I actually will. It depends a bit on what the requirements are, right? What do you want to use it for? Because editing video on your mobile is maybe not a use case I care that much about to begin with. I mean, I try very hard to do only one thing, right?
25:42
And do it well. So instead of trying to fragment myself: if you have a compelling use case for GL ES, which is, for those who don't know, the newer mobile version of OpenGL, I might consider it, but I don't want to maintain a fork, right? But I don't really require any magic things.
26:01
I only need to draw a quad and have a shader on it. I've been playing a lot with robots, and I had many issues doing some video understanding, I mean, finding a shape, finding a color or whatever. Do you think that Movit can be used to,
26:20
I mean, using some filter to maybe only select the red things in an image and then being able to locate some shapes? It could be an extension, very interesting in the robotics area. I think my answer here will have to be no, just because this is, again, sort of deviating from what I want to do, and there's only one of me, right?
26:40
If you want to, you can most certainly use Movit and write your own effects, but I'm not really sure what you would gain from it. Most likely, if you have that kind of specialized computer vision thing, maybe you don't really want what Movit gives you. Maybe you just want your own shader. You don't want a blur, right? You want some sort of specialized image recognition thing.
27:03
So then maybe Movit is not for you. This is really, for me at least, this is really about open source graphics. I think we are out of questions. Well, thank you then. Thanks.
27:43
So the next presentation is going to be about the offloading of video on Wayland.