What's new in GStreamer?
Formal metadata
Title: What's new in GStreamer?
Number of parts: 644
License: CC Attribution 2.0 Belgium: You may use, change, and reproduce the work or its contents in modified or unmodified form for any legal purpose, distribute it, and make it publicly available, provided you credit the author/rights holder in the manner they specify.
Identifiers: 10.5446/41740 (DOI)
Transcript: English (automatically generated)
00:07
So, we continue with the next talk, What's New in GStreamer, so a very generic talk on what's going on there, by Tim-Philipp Müller. Hi, everybody.
00:21
Thanks for coming. Let's talk about GStreamer. I've got way too many slides, so I'll, you know, be very quick. It's going to be a very high-level talk, and there'll be more talks. There's another talk by Olivier later, and then another GStreamer talk afterwards. Quick introduction. Who am I? I've been hacking on GStreamer for a while.
00:42
I, oh, there's a slide missing. Anyway, I'm Tim Müller. I'm one of the main GStreamer maintainers and developers. I've been doing this for 10 years, and I work for Centricular to help customers with GStreamer. What is GStreamer? Just in case you're coming in to have a peek, basically, it's a general framework
01:04
for multimedia processing. We're trying to provide building blocks that you can combine, you know, for reading, for streaming, downloading, encoding, decoding, muxing, RTP payloading, et cetera. And you can combine these freely to do whatever you want with your media flows, basically.
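As a rough illustration of combining such building blocks, here is a minimal playback sketch. It assumes a GStreamer 1.x installation with gst-launch-1.0 available; the file name is just a placeholder.

```shell
# Decode a local file and render the video: filesrc reads bytes,
# decodebin auto-plugs suitable demuxers/decoders, videoconvert fixes
# up the pixel format, and autovideosink picks a platform output.
gst-launch-1.0 filesrc location=movie.mp4 ! decodebin ! videoconvert ! autovideosink
```

Each `!` links two elements; swapping any stage out (a different source, a different sink) is the "freely combinable" part.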
01:23
We aim to be cross-platform, so, you know, Windows, Android, iOS, Linux, of course, embedded systems. We aim to be toolkit-agnostic. We use GLib, but, you know, we also have Qt integration, and integration for everything you
01:42
want, really. We don't tie you down to anything. We want to support any and all use cases, from editing to streaming to playback to recording, and there's a set of libraries that we provide, and plugins, and then you can write stuff on top. We provide a very abstract and very flexible API that allows you to do, hopefully, everything.
02:03
And we don't reinvent everything from scratch; we build on other libraries and other components, of course. We have low-level API to give you full control. We've got some high-level convenience API for things like playback and encoding and video editing and stuff like that.
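The high-level convenience API for playback mentioned here includes the playbin element, which wraps the whole decode-and-render chain; a minimal sketch (the URI is a placeholder):

```shell
# playbin figures out demuxing, decoding, and output sinks by itself;
# you just hand it a URI (file://, http://, rtsp://, ...).
gst-launch-1.0 playbin uri=file:///path/to/movie.mp4
```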
02:21
We have an RTSP server library that lets you easily stream stuff over RTSP, all this kind of stuff. And we don't try to make unreasonable assumptions, so you can use it everywhere. You can integrate it in your browser. You can use it in OpenGL applications. You know, whatever you want, you should be able to use GStreamer with it.
02:44
All right, so what have we been up to? Releases. The last release was some time ago: 1.12 in May. That's our current stable release. We were aiming for a six-month cycle, but decided to delay the upcoming one for a little
03:02
bit. So 1.14 is going to come out really soon now, basically. We're going to probably make pre-releases next week, and hopefully later this month or early next month, we'll have 1.14 out. That's our next stable feature release, basically. So what has landed? What's interesting, possibly: video conversion and video scaling can now run
03:24
multi-threaded, so if you've got high-resolution content, you can just spread the work over your cores very nicely. Timed Text Markup Language, TTML: there's a new plugin for that. This is quite nice. It's a new standard.
03:41
And it has the potential to describe text, subtitles in general, and text markup, so I'm quite excited about that. So we have a plugin for that. It's not enabled for auto-plugging yet; you have to set an environment variable. But yeah. And it supports the basic profile, at least.
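Enabling the opt-in auto-plugging might look roughly like this. The variable name `GST_TTML_AUTOPLUG` is an assumption on my part (check the 1.14 release notes for the exact name), and the URI is a placeholder for a stream carrying TTML subtitles.

```shell
# Opt in to auto-plugging the TTML subtitle support for this run,
# then play a stream that carries TTML subtitles.
GST_TTML_AUTOPLUG=1 gst-launch-1.0 playbin uri=file:///path/to/stream-with-ttml.mpd
```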
04:01
splitmuxsink. splitmuxsink is something that fragments your media stream, so you can say, well, you know, create a new file every so-and-so many megabytes or gigabytes or so-and-so many seconds, minutes, hours, whatever, and it just takes your encoded streams and it will split them. And it works with any container, even those that don't support that natively.
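The fragmenting element described here is splitmuxsink; a sketch of using it (assuming an x264 encoder is available, with placeholder file pattern and sizes):

```shell
# Record a live test source and start a new MP4 file every 60 seconds.
# max-size-time is in nanoseconds; location takes a printf-style pattern.
gst-launch-1.0 videotestsrc is-live=true ! x264enc ! h264parse \
    ! splitmuxsink location=recording-%02d.mp4 max-size-time=60000000000
```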
04:22
So you can just pick Matroska, MP4, whatever you want, and it'll just work. So that has been rewritten to be more deterministic now, and it should be more stable. And there's a new format-location-full signal, which allows you to get the first buffer
04:40
of a new fragment when it starts. So you can then read the metadata on it and get timestamps and any other information from it. You know, if you want to have special file names, you can do something in response to that. DASH trick-mode playback. That also landed. That's quite some work.
05:00
It's not entirely trivial because you need to stay within the envelope of the bandwidth. I mean, you don't have infinite download bandwidth, so you need to skip key frames. You need to find out where the key frames are. You need to skip segments. You need to figure out how can you utilize full bandwidth, but not more, and still squeeze
05:21
as many frames out of it as possible. So that landed. That works quite nicely with certain DASH streams. We have loads of new features and performance improvements on embedded, which I'm not going to talk about. We have Video4Linux, OMX, dmabuf, zero-copy, all that stuff.
05:42
Olivier has a talk right after this one, and he's going to tell you everything about it, so I'm just going to skip all that. Hardware-accelerated video encoding and decoding: we have lots of that, of course. We have a new msdk plugin for Intel's Media SDK, which provides video encoding and decoding
06:02
on Intel hardware. That works on Linux as well as on Windows. There's gstreamer-vaapi, which exists already. That is based on an open-source stack, but that only works on Linux, and that also has seen loads of new features and fixes, and the encoders are now auto-plugged, so
06:22
they work quite nicely. We have a new nvdec plugin, which is for the NVIDIA graphics stack, basically, and we already had an encoder for that, which has a few new features. Yeah, what else? What's coming up?
06:41
Something that just landed is the AOMedia AV1 support. AV1 is basically the next-generation video codec, hopefully better than H.265, and it's going to be royalty-free, and it's an open standard. Tim Terriberry is going to talk about it at 5 p.m. later today, but I'm really
07:01
excited about that. Basically, once we have that, and it's going to be widely deployed, it's going to be widely supported. Apple just joined the foundation, the Alliance for Open Media. It's just going to work. At that point, we will have a cutting-edge audio codec, Opus, and a video
07:20
codec, hopefully, so that's going to be great. We can ditch all the MPEG nonsense. The codec is still experimental. I think the bitstream either just has been stabilized or might be about to be stabilized. Go to Tim's talk. The encoding is still very, very slow, but we have the integration, so you can start playing with it if you like.
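A hedged encoding sketch for playing with the new AV1 support: the av1enc element name assumes the aom-based plugin from gst-plugins-bad is built, and since container support for AV1 was still settling at the time, the muxing step may need a recent build. Expect it to be very slow, as noted.

```shell
# Encode a handful of test frames to AV1 and put them in a Matroska file.
# num-buffers keeps the run short, because av1enc is still very slow.
gst-launch-1.0 videotestsrc num-buffers=100 ! av1enc ! matroskamux \
    ! filesink location=test-av1.mkv
```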
07:44
There's a new plugin called ipcpipeline, which allows you basically to split GStreamer pipelines over multiple processes, and again, Olivier is going to talk about all that in his talk. We have something called a ring buffer for debug logs. The thing is, if you enable debugging in GStreamer, we log so much stuff, you can
08:06
easily accumulate hundreds of megabytes and gigabytes of debug logs, but sometimes people have problems. They're like, well, after three days of streaming, I run into this error, and then you can't just really make a debug log. We now have a ring buffer for debug logs, and when you find a problem, you
08:21
can just grab the last megabytes out of it. That's quite nice. It's really simple to do, but no one had actually done it. We have a tracing framework, and that has seen quite a few improvements. We have a leak tracer, in particular, that works on embedded systems as well; Valgrind is nice on the desktop, but if you have
08:43
something that has much less overhead, that's much nicer. The leak tracer can do stack traces, of course. We can do snapshotting now. We can figure out better where latency in your pipeline actually comes from
09:05
without, you know, digging through the debug logs in too much detail. hlssink2: we had hlssink. You feed it an MPEG-TS stream, which is not always convenient. With hlssink2, basically, you feed it elementary streams, so you give it an
09:20
encoded video stream and an encoded audio stream, and it will do the splitting and muxing for you. It will use splitmuxsink internally, but it will work much nicer with content that is already encoded. hlssink kind of relies on an encoder up front, so it can force keyframes at the boundaries. hlssink2 will work without that, so that's nicer.
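A sketch of feeding the new hlssink2 an encoded elementary stream, assuming the element and its hlssink-style properties as shipped in 1.14 (segment and playlist paths are placeholders):

```shell
# Encode once, then let hlssink2 do the fragmenting and muxing into
# HLS segments plus a playlist; it forces keyframes at segment
# boundaries itself, so any encoder upstream will do.
gst-launch-1.0 videotestsrc is-live=true ! x264enc ! h264parse \
    ! hlssink2 location=segment%05d.ts playlist-location=playlist.m3u8 \
      target-duration=5 max-files=5
```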
09:42
That's the use case you need it for. RTSP: we have an RTSP server library and a client, of course, so you can easily stream, you know, stream over RTSP with very little effort, and it's used in security cameras and whatnot. RTSP 2.0 support has just landed, and I believe we might be the first one
10:02
to implement that. And there's also the audio backchannel, which is a horrible extension to the standard to allow you to basically send audio back over a playback RTSP connection. That's coming up soon. In general, we have our plug-ins split into
10:22
multiple modules, and, well, they're called base, good, ugly, bad. And people don't like, you know, "bad". They see "gst-plugins-bad"; it's kind of an inside joke, but they're kind of worried. And we're not really putting enough effort into moving things from bad into good or base.
10:41
So we're kind of trying to change that and consolidate things. Usually we add new stuff in bad until the API is stable and we, you know, like it, and then we move it over. But we just, you know, haven't been good enough about that. So we're making an effort to move more stuff into our core modules. The good news is that the MP3 patents have expired, which means we can
11:02
move MP3 decoders and encoders, and the MP2 encoder as well. We can move that into good, and we've done that, which is nice. AC-3 patents have also expired. Unfortunately, we can't move the decoder, because it's GPL-licensed and we don't do GPL in our core and good modules, basically.
11:24
What else? We have a new bunch of mixers: audiomixer, and compositor for video. They're based on a new aggregator base class, and what it does is handle live streams properly. So you can actually have, you know, a defined latency for the mixer.
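A minimal live-mixing sketch with one of the new aggregator-based elements (audiomixer shown here; compositor is the video analogue):

```shell
# Mix two live tones into one output. Because audiomixer is built on
# the aggregator base class, it operates with a defined latency, so a
# stalled input branch does not jam up the whole pipeline.
gst-launch-1.0 audiomixer name=mix ! audioconvert ! autoaudiosink \
    audiotestsrc is-live=true freq=440 ! mix. \
    audiotestsrc is-live=true freq=880 ! mix.
```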
11:43
So basically, if one of your input streams drops out because someone pulled out a network cable, you still want your pipeline to not jam up but continue running, right? The old base class didn't handle that really nicely, so we made a new base class for that, and it works quite nicely. We've moved it into core now, and we will hopefully move the
12:02
audio-specific one as well, and then hopefully the video one soon after. And we can start porting muxers to the new base class. flvmux has already been ported, and the other ones will follow, because at that point, you know, you will have a much nicer experience making live pipelines that don't jam up.
12:22
Our OpenGL integration library and plugins have moved into base as well; they were in bad before. So now we can build on that, and the API is stable now, so really, we're quite happy with that as well. Then WebRTC. WebRTC is very nice. If you don't know what it is, think of it as Skype in
12:42
your browser. How do I stream stuff to my browser? And, you know, the answer is always: well, it's not so easy, right? I mean, it depends what operating systems.
13:02
It depends what browsers, what browser versions you want to support, what, you know, codecs you can support, and it's a mess really. I mean, you might get away with sort of, you know, DASH and HLS in most cases, but it's, yeah, it's not so easy. But WebRTC, I mean, you know, of course it has
13:20
different advantages and disadvantages, because, you know, adaptive streaming, HLS and DASH, is made to be scalable. But still, I mean, WebRTC, I think it's going to be big because it's going to work everywhere. It's going to work in most recent browsers sooner or later. So we basically have now, yesterday,
13:41
we merged the GStreamer WebRTC plugin and library into gst-plugins-bad. It just landed. It uses libnice for the ICE stuff, to get, you know, through firewalls and NAT, et cetera. And if you're interested in that, Nirbheek has just written a blog post about it, and we also have a demo repository,
14:02
which might be a little bit rotten, but it should work. So that's really nice, because it basically allows you to easily leverage WebRTC and stream to WebRTC clients using GStreamer, anything GStreamer.
14:21
And you can leverage all of GStreamer. There were existing efforts. One was OpenWebRTC, sponsored by Ericsson. That's kind of dead now. And the reason that wasn't really continued is that, well, let's say there was a mismatch between what we, as library developers, need from an API. It was just, you know, easier to do a new one.
14:43
Kurento also has something like that, but it's more media-server focused. It's a very rich framework. I don't know if it is much developed; it might just be picking up again. And there were some proprietary solutions, but we're open-source guys, so we want our stuff open source,
15:01
and we really want it in GStreamer. Then you have libwebrtc from Google, of course. That's the thing everyone uses, more or less. I don't know if anyone here has used it, but it's horrible. It's really horrible. I mean, it's so painful. It works, but it's very limited. If you want to do advanced stuff, you have to fork it. I mean, even just building it seems to be a problem.
15:25
I don't know. Anyway, so, I mean, you know, webrtcbin, our new GStreamer thing, is very flexible. You've got full control. It more or less maps the existing APIs, so, I mean, you know, you don't have to learn something new in that respect.
15:41
The nice thing is you can leverage all the existing stuff in GStreamer: hardware encoders, decoders, zero-copy capture and rendering. You can make all that work, so it will work on embedded immediately. You can feed it pre-encoded content, and you don't have to fork libwebrtc and maintain it and update it. I mean, it's not fully, I mean, you know,
16:01
it's not super complete, but it's used in production, so what works, works well. But, you know, it doesn't do everything yet. Renegotiation isn't fully supported. Receive-only streams don't work fully yet. But, you know, the internals map the spec more or less,
16:21
so you can easily see where the gaps are and just fill them in. So, you know, if you want to help with that, help is wanted. We have performance optimizations more or less everywhere. There's so much stuff in the pipeline, but, you know, it doesn't really seem worth talking about in detail; for the embedded parts, see Olivier's talk.
16:40
SRT, Secure Reliable Transport: it's a new thing, very much hyped and marketed, but it's also very nice, and there seems to be much support for it industry-wide. So it seems well placed to replace RTMP, and we've just merged source and sink plug-ins for it, so you can stream if you like.
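A send/receive sketch for the new SRT plug-ins. The element names used here (srtserversink, srtclientsrc) are an assumption based on the initial merge; later releases renamed them to srtsink and srtsrc, so check the elements shipped with your version.

```shell
# Sender: encode test video into MPEG-TS and listen for an SRT caller
# on port 8888.
gst-launch-1.0 videotestsrc ! x264enc ! mpegtsmux \
    ! srtserversink uri=srt://:8888

# Receiver: connect to the sender, demux, decode, and render.
gst-launch-1.0 srtclientsrc uri=srt://127.0.0.1:8888 \
    ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink
```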
17:00
Meson. Our current build system is autotools, but we're moving to Meson. Meson basically, well, takes the best parts of CMake and then, you know, improves upon that. We didn't like CMake, so. It's got a very nice, maintainable description language. It's not Turing-complete. I mean, it's, you know, it still has a few things missing,
17:23
but for us the main motivation was also that we have a Microsoft Visual Studio build, which works. Yeah, but there's still some work to be done. We're going to switch to Meson fully and drop autotools, but it needs to be ready. We need to make it work everywhere.
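The Meson workflow being adopted looks roughly like this (directory names are arbitrary; run from a source checkout that ships a meson.build):

```shell
# Configure an out-of-tree build directory, then compile with ninja.
meson setup build
ninja -C build

# Run the test suite, then install.
ninja -C build test
ninja -C build install
```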
17:42
Rust. Rust is awesome. It's a new programming language, originally from Mozilla. It's basically the C++ we always wanted. It's basically a systems programming language, but it's safe, it's productive, and it's more or less as fast as C and C++.
18:01
Zero-cost abstractions. It's awesome. We're not going to port all of GStreamer to Rust any time soon, but, you know, we're playing with it. We're looking at it. It matches our memory model, with ownership, et cetera, really.
18:26
The Rust bindings are in excellent shape. They can be used in production. Sebastian has a talk tomorrow at 11. You should check it out. Our GStreamer C# bindings have also been rejuvenated, and they should be up to date now.
18:43
What can we improve upon? Well, the usual things, of course, but one thing that's sort of a pet peeve of mine: with adaptive streaming, the client side is really well supported, but the production side isn't as nicely supported. I mean, we have hlssink, but, you know,
19:01
it's not really that nice to use. So, I mean, RTSP server has been a massive success for us because it's so easy to use and it's so powerful, and it would be really nice if we had something similar for Dash and HLS. Yeah, so, in general, writing simple servers should be easier,
19:21
just like a sink element, and the same for an HTTP server element, you know. People like making little pipelines and just running gst-launch and just, you know, serving their webcam, and, you know, they don't necessarily want to write code. Sometimes they just want to use the library for a little use case: make a pipeline, use a plugin, and that's all they need.
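In the meantime, the sort of little serving pipeline people build today might look like this (device path, host, and port are placeholders; it assumes a V4L2 webcam on Linux):

```shell
# Grab the webcam, encode with low latency, mux into MPEG-TS, and
# serve the stream to any TCP client that connects.
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert \
    ! x264enc tune=zerolatency ! mpegtsmux \
    ! tcpserversink host=0.0.0.0 port=8080
```

The wish expressed in the talk is for dedicated server sink elements so that this kind of one-liner could speak HTTP, DASH, or HLS directly.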
19:43
Yeah, that's all I have. Thank you very much, and thanks to the organizers, of course, and if you have any questions or comments. Yes, we can. So I can pass the mic if you have any questions. Don't be shy.
20:05
No questions. Excellent. One question here. Not really about what's new now, but any news of SIP support in GStreamer? Session Initiation Protocol?
20:23
Well, I mean, you can use it, I think, but, I mean, there are libraries. Oh, sorry. So there's a library called Farstream, which is a GStreamer-based library that basically implements the media part that you need to support SIP, which we developed almost a decade ago now,
20:42
and that has been used in production. It's used in Pidgin. It's used in Empathy. But it doesn't do the actual, like, SIP protocol part. You need a different library for that. There's a bunch of them. They're all terrible, because SIP is terrible.
21:01
Any other question? There's one in the back. Oh, one in the back. I didn't see you. You're in the dark. Thanks for your talk. Last year I was looking into how I could query the dimensions of the screen
21:24
in GStreamer to get them back in the pipeline, and I didn't manage, couldn't find it. What's the best way to get support, to find support on the web? General GStreamer support? Yeah, like technical questions. Right. Well, which brings me to my last slide.
21:42
So the best, I mean, the best way actually is to find us on IRC: we are in the #gstreamer channel on the Freenode network. That's the best way to just, you know, get questions answered quickly. Most developers will be there during, you know, European daytime, North American daytime.
22:02
But we have some Australian people as well, but those are the busiest times. So that's the easiest way. There's also the gstreamer-devel mailing list, but it's fairly high volume and, yeah, I mean, people sort of answer when they have time, but skip it otherwise. So IRC is best. In general, follow us on Twitter. It's not high traffic,
22:22
but blog posts, et cetera, that's where you find them. And we have a hackfest coming up in the spring and a conference at the end of the year, in Edinburgh, probably. The date hasn't been confirmed yet. Anyway, thank you very much, and Olivier is up next with GStreamer for Embedded.