How to Build an Open Source Embedded Video Player
Formal Metadata

Title | How to Build an Open Source Embedded Video Player
Number of Parts | 611
License | CC Attribution 2.0 Belgium: You may use, change, and copy, distribute, and make the work or its content publicly available, in unchanged or changed form, for any legal purpose, provided the author/rights holder is credited in the manner they specify.
Identifiers | 10.5446/42094 (DOI)
Production Year | 2017
FOSDEM 2017, 557 / 611
Transcript: English (auto-generated)
00:05
Okay, my name is Michael and I'm working in the graphics team at Pengutronix, and today I will tell you something about embedded video playback systems and how to build them using open source. So what is an embedded video playback system?
00:23
You have a screen in your airplane seat and you want to watch movies there, so you need some video playback for that. You can do other things on it, but mainly you want to watch movies. Or while driving a car: you want to look at the route, but you also want to
00:41
watch movies, of course. Or here we have an example of a smart TV; you can put it into a museum and show videos that explain more details about the exhibit you are currently looking at. So that is what I mean by an embedded video playback system.
01:04
I will start by narrowing down the features, because the systems I just showed do more than play videos, but we will only focus on the video part. Then we will have a look at the status quo: if you go to a vendor website, what do you get there?
01:20
Then we will see how we can do all of this using open source, and finally I will take a short glimpse into the future, at possible next steps where we can improve things. So, the features. I drew a small mockup of the application we are going to build. On the left-hand side you see the user interface.
01:41
We have videos A, B, C, and D, and each is playing a short preview. As a user you just select one of the videos and it is played back in full screen on the whole display. And of course you want OpenGL acceleration for all of that, to make it responsive, and
02:04
because we are on an embedded system, we simply need it. For the system we are using an i.MX6 SoC, which is built by Freescale, and the SoC
02:20
features a Chips&Media CODA video decoder, and it has a Vivante GC3000 GPU on the Plus variant of the platform; if you are using something before the Plus variant, you have a GC2000 there. On top of these SoC features we need a driver for the CODA decoder and an OpenGL driver
02:49
for our GPU. On top of that, to implement the actual features, we have some video input (our files), then some software that drives the driver and controls the decoding, and then we have
03:05
a graphical user interface, as I said before, which in turn uses OpenGL and then sends the whole output to some display. So that is the system we are going to build here.
03:22
So the first step: go to your vendor's homepage and download a BSP. You usually get a Yocto-based BSP with a Linux kernel and userspace libraries, so basically everything you need.
03:40
The Linux kernel you get on the vendor's page is usually really old; for example, on the i.MX6 you either get a 3.14 or a 4.1, so you don't really want to use that anymore. What is even worse: for the GPU and the video decoding we only get binary blob drivers.
04:04
We cannot look at the source code, we cannot really debug it, we cannot fix it, and that is for the core parts of our system. I'm not sure you want that; these blobs impose obstacles for debugging and for the maintenance of our system.
04:22
So, can we do everything without these blobs and just use open source software from upstream? Let's look at our system, starting at the user interface. We are using QML for that. It is a language that is a bit similar to HTML, and it is pretty easy to define user interfaces
04:44
with it. It uses Qt in the background, and because of that it can use OpenGL to accelerate compositing. We have a demo built with it, based on our mockup from before; you can see a photo of the demo here, or the demo in
05:04
action up in front; I hope you can see it. The whole application consists of 150 lines of QML code, so that is really very little code, and it has some interaction and some features, which is impressive. For the actual
05:24
demo we need 200 more lines of C++ code, which is necessary to control the video, so that we can stop the videos and mute them; that has to be done in C++.
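(The 200 lines of C++ are not reproduced in the talk. As a rough illustration only, here is a minimal sketch of such control code, written in C against GStreamer's playbin element; the URI is a placeholder, and the real demo wires this into Qt instead.)

```c
/* Minimal sketch of video control, not the speaker's actual code:
 * start, mute, and stop a video using GStreamer's playbin element. */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GstElement *player = gst_element_factory_make("playbin", "player");
    /* Placeholder URI; the demo plays local video files. */
    g_object_set(player, "uri", "file:///videos/a.mp4", NULL);

    /* Start playback. */
    gst_element_set_state(player, GST_STATE_PLAYING);
    /* Mute, e.g. while the clip is only a preview tile. */
    g_object_set(player, "mute", TRUE, NULL);
    /* ... a real application runs a main loop here ... */

    /* Stop the video again and clean up. */
    gst_element_set_state(player, GST_STATE_NULL);
    gst_object_unref(player);
    return 0;
}
```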
05:49
As I said before, we are using OpenGL for this. The OpenGL driver usually comes from Vivante and is a blob driver, but we have an alternative here: the etnaviv driver, which is a reverse-engineered driver for the
06:02
Vivante GPU. It is available upstream in Mesa since version 17.0 and in Linux since 4.5. It implements OpenGL, of course, so we can just use it from Qt.
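(As a side note on verifying such a setup: one generic way to check which kernel driver actually backs a DRM device node is libdrm's drmGetVersion. This sketch is not from the talk; the device path is an example, and the program links against libdrm with -ldrm.)

```c
/* Sketch: report which kernel DRM driver backs a device node,
 * e.g. to confirm that etnaviv is in use. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    drmVersionPtr ver = drmGetVersion(fd);
    if (ver) {
        /* On an i.MX6 with the open stack, the GPU render node reports
         * "etnaviv", while the display controller reports "imx-drm". */
        printf("drm driver: %s\n", ver->name);
        drmFreeVersion(ver);
    }

    close(fd);
    return 0;
}
```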
06:22
We can composite our user interface, and especially the video frames, in hardware, which is really useful, because we don't want to do that copying in software. Now comes the problem: how do we get the video frames into the etnaviv driver?
06:45
For that there is no solution upstream in GStreamer yet. We wrote it ourselves, as the GstVideoItem you see down here, and it does a zero-copy import from GStreamer to QML, or rather etnaviv, using DMA-buf handles in the
07:07
system. This is one very important part that I would like to emphasize once more: we do not need a copy when we go from GStreamer to QML.
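(The GstVideoItem itself is not in upstream GStreamer, so its code is not shown here. The sketch below only illustrates the underlying zero-copy technique under stated assumptions: a dmabuf-backed GstBuffer is wrapped in an EGLImage and bound as a GL texture, so the CPU never touches the pixel data. The helper name and parameters are invented for illustration, and the EGL/GL extension entry points would normally be fetched via eglGetProcAddress.)

```c
/* Illustration of the zero-copy idea, not the actual GstVideoItem code. */
#define EGL_EGLEXT_PROTOTYPES
#define GL_GLEXT_PROTOTYPES
#include <gst/gst.h>
#include <gst/allocators/gstdmabuf.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* width/height/fourcc/stride describe the decoded frame; assumed to be
 * known from the negotiated GStreamer caps. */
GLuint import_frame(EGLDisplay display, GstBuffer *buffer,
                    EGLint width, EGLint height,
                    EGLint fourcc, EGLint stride)
{
    GstMemory *mem = gst_buffer_peek_memory(buffer, 0);

    if (!gst_is_dmabuf_memory(mem))
        return 0; /* a real item would need a (copying) fallback path */

    EGLint attrs[] = {
        EGL_WIDTH, width,
        EGL_HEIGHT, height,
        EGL_LINUX_DRM_FOURCC_EXT, fourcc,
        EGL_DMA_BUF_PLANE0_FD_EXT, gst_dmabuf_memory_get_fd(mem),
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
        EGL_NONE
    };

    /* EGL only wraps the dma-buf; the pixels stay where the decoder
     * wrote them. */
    EGLImageKHR image = eglCreateImageKHR(display, EGL_NO_CONTEXT,
            EGL_LINUX_DMA_BUF_EXT, NULL, attrs);
    if (image == EGL_NO_IMAGE_KHR)
        return 0;

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    /* Use the EGLImage as the texture's storage: still zero-copy. */
    glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES,
                                 (GLeglImageOES)image);
    return tex;
}
```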
07:23
Then we have autoplugging, which is a mechanism in GStreamer to build up the pipeline. Very simplified, it looks like this: we have a file source for reading the files from our file system, then we do some demuxing to
07:40
get from the container format to the raw stream, some parsing, and then we need a decoder for our video data. We also want to do that in hardware rather than in software, so what can we use for that?
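(A hedged sketch of this simplified pipeline, assuming H.264 video in an MP4 container; the element names qtdemux, h264parse, and v4l2h264dec depend on the container, the codec, and the platform, and the hardware decoder element only shows up once the kernel driver described next is available.)

```c
/* Simplified decode pipeline: file source -> demuxer -> parser ->
 * hardware decoder -> video sink. Element names are assumptions. */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    GError *error = NULL;

    gst_init(&argc, &argv);

    GstElement *pipeline = gst_parse_launch(
        "filesrc location=video.mp4 ! qtdemux ! h264parse "
        "! v4l2h264dec ! autovideosink", &error);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until the stream ends or an error occurs. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
            GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```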
08:11
There is a CODA driver in the Linux kernel; it is the config item VIDEO_CODA. You enable it, and if you are running on an i.MX6 and everything is configured correctly, you will see a /dev/videoX device
08:22
node for it from the driver. This implements a Video4Linux mem2mem device, and fortunately we have a GStreamer element for these devices, so we use this GStreamer element; it uses the kernel driver
08:45
and everything magically works.
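(A small sketch of what that looks like from userspace: with the driver enabled, the resulting /dev/videoX node can be probed with the standard VIDIOC_QUERYCAP ioctl and should report mem2mem capabilities. The device path is an example.)

```c
/* Sketch: probe a V4L2 device node and check whether it is a mem2mem
 * device such as the CODA decoder. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    struct v4l2_capability cap;
    int fd = open("/dev/video0", O_RDWR);

    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(&cap, 0, sizeof(cap));
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
        perror("VIDIOC_QUERYCAP");
        close(fd);
        return 1;
    }

    printf("driver: %s, card: %s\n", cap.driver, cap.card);

    /* A mem2mem node accepts compressed buffers on its output side and
     * hands decoded frames back on its capture side. */
    if (cap.capabilities & (V4L2_CAP_VIDEO_M2M | V4L2_CAP_VIDEO_M2M_MPLANE))
        printf("this is a mem2mem device\n");

    close(fd);
    return 0;
}
```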
09:01
Then Freescale, or now NXP, put some customizations on their SoC. These are implemented in the IPU and do, for example, detiling of the decoder output for the actual scan-out on the display. Drivers for that are about to be mainlined in the Linux kernel, which is pretty nice. Unfortunately, on the CODA we still
09:23
have closed-source firmware, so we cannot really look into the decoder; we have to use a firmware uploader in the driver. Maybe someone wants to write a free firmware for it, but it is not there yet. So if we go back to the system
09:41
architecture, we again have our SoC down here. Let's start with the video files: the video files go into GStreamer, GStreamer uses the Video4Linux CODA driver in the Linux kernel to use the hardware decoder, and then we use
10:02
the zero-copy sink to jump over to Qt, which uses Mesa and etnaviv for compositing the user interface and the video frames, and then forwards everything to the display. So everything in here is open source. What's next? We
10:32
have to find an upstream solution for the GStreamer-to-etnaviv interface, basically the GstVideoItem you saw before. We might use other
10:44
compositors instead of QML and Qt, for example some Wayland compositor. Another idea is to use adaptive streaming, with different bitrates and different video files for the preview video and the full-screen video, so that
11:04
we can play different video qualities there. So I'm already at the conclusion. I first looked at the binary blob drivers and the issues
11:28
with debugging and maintenance of the binary blob drivers and the vendor kernels. Then I showed how to build a user interface with etnaviv and QML
11:40
using open source. Then we looked at the video decoding, which is done by GStreamer using the Video4Linux CODA driver, and I had a short glimpse into future work using Wayland and adaptive streaming. So as a
12:05
conclusion, I showed that embedded video playback does not require blob drivers anymore. With that, I would like to thank you all for your attention, and if you want to have a look at the actual hardware and the demo that is up front,
12:20
you can come here and play around with it. Thank you. If there are any questions, feel free to ask or come to me.
12:50
So the question, or rather the remark, was that there is a QML sink. I wasn't
13:01
aware of that sink, and I'm not sure whether it uses the zero-copy path and an EGL image upload. Okay. So thank you again, and if you have any further questions, come up to
13:37
the front. Thank you.