
How to Build an Open Source Embedded Video Player


Formal Metadata

Title
How to Build an Open Source Embedded Video Player
Number of parts
611
License
CC Attribution 2.0 Belgium:
You may use, modify, copy, distribute, and make the work or its content publicly available in unchanged or modified form for any legal purpose, provided that you credit the author/rights holder in the manner specified by them.
Production year
2017

Content Metadata

Abstract
Video playback for embedded devices such as infotainment systems and media centers demands hardware accelerators to achieve reasonable performance. Unfortunately, vendors provide the drivers for the accelerators only as binary blobs. We demonstrate how we built a video playback system that uses hardware acceleration on i.MX6 by using solely open source software, including GStreamer, Qt QML, the etnaviv GPU driver, and the coda video decoder driver.

While hardware accelerators are necessary to provide reasonable performance for video playback on embedded devices, the drivers that are provided as binary blobs by hardware vendors cause a lot of problems. They are linked to specific versions of the Linux kernel, may contain security and performance issues, and are pretty much impossible to debug for developers trying to build a system based on these drivers.

We built an i.MX6 based embedded system that simultaneously decodes and displays four videos. Our system solely uses open source drivers to control the available hardware accelerators.

The GUI consists of a Qt application based on QML. Using Qt and QML allows us to use OpenGL for compositing the user interface. OpenGL is backed by the open source etnaviv GPU driver and the Vivante GPU.

The Qt application receives the video streams from a GStreamer pipeline (using playbin). The GStreamer pipeline contains a v4l2 decoder element, which uses the coda v4l2 driver for the CODA 960 video encoder and decoder IP core (VPU in the Freescale/NXP Reference Manual), and a sink element to make the frames available to the Qt application.

The entire pipeline, including the GStreamer-to-Qt handover, uses dma_bufs to avoid copies in software.

This example shows how to use open source drivers to ease the development of video and graphics applications on embedded systems.
Transcript: English (automatically generated)
Okay, my name is Michael and I'm working in the graphics team at Pengutronix, and today I will tell you something about embedded video playback systems and how to build them using open source. So what's an embedded video playback system?
You have a screen in your airplane and you want to watch movies there, so you need some video playback for that. You can do other stuff on it, but mainly you want to watch movies. Or while driving a car, you want to look at the route, but of course you also want to watch movies. Or here we have the example of a smart TV: you can put it into a museum and show videos that explain more details about the exhibit you are currently looking at. So that's what I mean by an embedded video playback system.
I will start by narrowing down the features, because the systems I just showed do more than playing videos, but we will only focus on the video part. Then we'll have a look at the status quo: if you go to a vendor website, what do you get there?
Then, how we can do all of this using open source. And finally I'll take a short glimpse into the future, at what the next steps might be and where we can still improve things. So, the features. I drew a small mockup of the application we are going to build. On the left-hand side you see a user interface with videos A, B, C and D, each playing a short preview. As a user you just select one of the videos and it is played back in full screen on the whole display. And of course you want OpenGL acceleration for all of that to make it responsive; and because we are on an embedded system, we simply need it.
For the system we are using an i.MX6 SoC built by Freescale. The SoC features a Chips&Media CODA video decoder and a Vivante GC3000 GPU, at least on the Plus variant; if you are using something before the Plus variant, you have a GC2000 there. On top of these SoC features we need a driver for the CODA decoder and an OpenGL driver for our GPU, and on top of that, to implement the actual features, we have some video input, namely our files, some software to drive the decoder and control the decoding, and then a graphical user interface, as I said before, which in turn uses OpenGL and sends the whole output to a display. So that's the system we are going to build here.
So the first step is: go to your vendor's homepage and download a BSP. You usually get a Yocto-based BSP with a Linux kernel and user space libraries, so basically everything you need. The Linux kernel you get from the vendor's page is usually really old; for example, on the i.MX6 you get either a 3.14 or a 4.1, so you don't really want to use that anymore. What's even worse, for the GPU and the video decoding you only get binary blob drivers. We cannot look at the source code, we cannot really debug them, we cannot fix them, and that's for the core parts of our system; I'm not sure you want to do that. These blobs impose obstacles for debugging and for the maintenance of our system.
So, can we do all of this without these blobs and just use open source software from upstream? Let's look at our system, starting at the user interface. We're using QML for that. It's a language that is a bit similar to HTML, and it's pretty easy to define user interfaces with it. It uses Qt in the background, and because of that it can use OpenGL to accelerate the compositing. For example, we built a demo with it, based on the mockup from before; you can see a photo of the demo here, or the demo in action up in front, and I hope it's still playing. The whole application consists of 150 lines of QML code, so that's really very little code, and it has some interaction and some features, so that's impressive. For the actual demo we need about 200 more lines of C++ code, which is necessary to control the videos, so we can stop them and mute them; that has to be done in C++.
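To give an idea of how little glue code is involved, here is a minimal sketch of how such a controller could be exposed from C++ to QML. This is not the talk's actual code; the class name VideoControl, the muted property, and the qrc:/main.qml path are made up for illustration, and the actual GStreamer hookup is left out.

#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QQmlContext>

// Hypothetical controller: lets QML stop or mute a video.
class VideoControl : public QObject
{
    Q_OBJECT
    Q_PROPERTY(bool muted READ muted WRITE setMuted NOTIFY mutedChanged)
public:
    bool muted() const { return m_muted; }
    void setMuted(bool m)
    {
        if (m != m_muted) {
            m_muted = m;   // a real implementation would mute the pipeline here
            emit mutedChanged();
        }
    }
    Q_INVOKABLE void stop() { /* tell the GStreamer pipeline to stop */ }
signals:
    void mutedChanged();
private:
    bool m_muted = false;
};

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine;

    VideoControl control;
    // Expose the controller to QML as "videoControl", so the UI can
    // call videoControl.stop() or bind to videoControl.muted.
    engine.rootContext()->setContextProperty("videoControl", &control);

    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}

#include "main.moc"

In QML the controller is then reachable by name, for example onClicked: videoControl.stop().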
As I said before, we are using OpenGL for that. The OpenGL driver is usually the binary blob from Vivante, but we have an alternative here: the etnaviv driver, a reverse-engineered driver for the Vivante GPUs. It's available upstream in Mesa since version 17 and in Linux since 4.5. It implements OpenGL, of course, so we can just use it from Qt, and we can composite our user interface, and especially the video frames, in hardware, which is really valuable because we don't want to do that copying in software. Now comes the problem: how do we get the video frames into the etnaviv driver? There is no solution for that in GStreamer upstream yet. We wrote it ourselves, as the GstVideoItem you see down here. It does a zero-copy import from GStreamer to QML, or rather to etnaviv, using dma-buf handles. This is one very important part that I'd like to emphasize once more: we do not need to copy when we go from GStreamer to QML.
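As a rough illustration of what such a zero-copy handover relies on (this is not the actual GstVideoItem implementation), the sketch below pulls a decoded frame from a hypothetical appsink and checks whether it is backed by a dma-buf whose file descriptor could then be imported into EGL/etnaviv.

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <gst/allocators/gstdmabuf.h>

// Pull one decoded frame and return its dma-buf file descriptor,
// or -1 if the buffer is not dma-buf backed. Note: in real code the
// sample must stay alive (or the fd must be dup'ed) for as long as
// the imported frame is in use.
static int pull_dmabuf_fd(GstElement *appsink)
{
    GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(appsink));
    if (!sample)
        return -1;

    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstMemory *mem = gst_buffer_peek_memory(buffer, 0);

    int fd = -1;
    if (gst_is_dmabuf_memory(mem)) {
        // The decoder exported the frame as a dma-buf: no copy needed,
        // the fd can be imported for GPU compositing.
        fd = gst_dmabuf_memory_get_fd(mem);
    }

    gst_sample_unref(sample);
    return fd;
}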
Then we have autoplugging, which is a mechanism in GStreamer to build up the pipeline. Very simplified, it looks like this: we have a file source for reading the files from our file system, then we do some demuxing to get from the container format to the raw stream, some parsing, and then we need a decoder for our video data. We also want to do the decoding in hardware, not in software, so what can we use for that? There is a coda driver in the Linux kernel; it's the config item VIDEO_CODA. You enable it, and if you're running on an i.MX6 and everything is configured correctly, you will see a /dev/videoX device node from this driver. It implements a Video4Linux mem2mem device, and fortunately for these devices there is an element in GStreamer, so we use this GStreamer element, which uses the kernel driver, and everything magically works.
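For illustration, here is a minimal sketch of such a pipeline built with gst_parse_launch. The file name and the element names (qtdemux, h264parse, v4l2h264dec, kmssink) are assumptions for an H.264 file in an MP4 container and depend on the GStreamer version and the media; in the real system autoplugging (or playbin) selects the elements automatically.

#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GError *error = nullptr;
    // Decoder and sink names are assumptions; adjust to your setup.
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=video.mp4 ! qtdemux ! h264parse ! "
        "v4l2h264dec ! kmssink",
        &error);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        g_clear_error(&error);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Run until end-of-stream or an error is posted on the bus.
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        static_cast<GstMessageType>(GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
    if (msg)
        gst_message_unref(msg);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(pipeline);
    return 0;
}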
Then Freescale, or now NXP, put some customizations on their SoC. These are implemented in the IPU and do, for example, the untiling of the coda's output for the actual scanout on the display. Drivers for that are about to be mainlined in the Linux kernel, which is pretty nice. Unfortunately, for the coda we still have closed-source firmware, so we cannot really look into the decoder itself; the driver has to upload a firmware blob. Maybe someone wants to write a free firmware for that, but it's not there yet. So if we go back to the system architecture, we again have our SoC down here. Let's start with the video files: the video files go into GStreamer, GStreamer uses the Video4Linux coda driver in the Linux kernel to use the hardware decoder, then we use the zero-copy sink to jump over to Qt, which uses Mesa and etnaviv to composite the user interface and the video frames and then forwards everything to the display. So everything in here is open source.
So what's next? We have to find an upstream solution for the GStreamer-to-etnaviv interface, basically the GstVideoItem you saw before. We might use other compositors instead of QML and Qt, for example a Wayland compositor. Another idea is to use adaptive streaming with different bitrates and different video files for the preview videos and the full-screen video, so that we can play different video qualities there.
So I'm already at the conclusion. I first looked at the binary blob drivers and the issues with debugging and maintaining those drivers and the vendor kernels. Then I showed how to build a user interface with etnaviv and QML using open source. Then we looked at the video decoding, which is done by GStreamer using the Video4Linux coda driver, and I gave a short glimpse into future work using Wayland and adaptive streaming. As a conclusion, I showed that embedded video playback does not require blob drivers anymore. With that I'd like to thank you all for your attention, and if you want to have a look at the actual hardware and the demo, it's up front; you can come here and play around with it. Thank you. So if there are any questions, feel free to ask or come to me.
So the question, or rather the remark, was that there is a QML sink. I wasn't aware of that sink, but I'm not sure if it does the zero-copy and an EGLImage upload. Okay. So thank you again, and if you have any further questions, come up to the front. Thank you.