
Functional audio and video stream generation with Liquidsoap


Formal Metadata

Title: Functional audio and video stream generation with Liquidsoap
Number of Parts: 490
License: CC Attribution 2.0 Belgium. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract:
The talk gives a general overview of the Liquidsoap language, with a focus on recent new features: support for HLS, efficient video, etc. Generating multimedia streams, such as for webradios or live YouTube channels, is a complicated task. One needs to face low-level issues (properly encoding and distributing the streams), mid-level issues (performing normalization, signal processing, color grading, etc.) and high-level issues such as generating the stream from a wide variety of sources (local files, other streams, live interventions, user requests, etc.) and properly combining them (performing transitions, adding commercials, varying the contents during the day, etc.). In this talk, we present Liquidsoap, a dedicated high-level functional language, which allows performing all these tasks in a modular way, with strong guarantees that the stream will not fail in the long run.
Transcript: English (auto-generated)
And our next talk will present Liquidsoap. David, come on up. Good afternoon. I'm really pleased to be here. It's a pleasure to be participating in this conference.
I'm a software engineer myself. I work in web development, mostly, but I've worked on this project for 10 to 15 years, I guess, which makes it a long time. We've got Samuel, who's here, who's part of the project as well. And David, who couldn't come. So yeah, I'm going to talk about Liquidsoap, which is an audio and video streaming language.
So what is Liquidsoap? Let's start with that. It's a language to create audio and video streams. What I mean by video streams is not just the media content, though we do that, but also everything that goes above that, which is the management of that content. Creating tracks, adding metadata,
switching between different sources, encoding, sending to different destinations. Primarily, our first focus was IceCast and radio streaming. That's the historical background of the project, but we've been, since then, exploring a lot of different avenues.
And it's important to remember it's a language, and so the big difference and the big advantage of the tool is that you have a flexibility that goes much beyond what configuration file would give you in a very declarative way. And as we're going to see at the end of the presentation, it's a very important feature
because a simple thing to realize is that people want to build radios online, and they're like, oh, that's pretty easy. Just send some data, stream to an Icecast server, and you'll be done. And then you realize, when you start wanting to add features and have something that's nice, that it's complicated. So here's a quick example. I'm going to give much more details about it
at the end of the presentation, but that gives you right away what I'm talking about is a playlist of files, which is your music, a playlist of jingles that you want to play on the radio every five songs here, and you want to output that to IceCast. So you start with your radio as your playlist, then you add jingles with weights.
So every four playlist files, you're going to have one jingle. You encode that in MP3, and you send that to an Icecast server. I should change the password on that. So it's a programming tool to help the user, and remember that a thing that's important for us as a programming tool is that most of our user base does not have a programming background.
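The example just described might be sketched like this in a Liquidsoap script (paths, host, and password are hypothetical; the weights give one jingle for every four music files):

```liquidsoap
# Music and jingles come from hypothetical local directories.
music   = playlist("/path/to/music")
jingles = playlist("/path/to/jingles")

# One jingle for every four music files.
radio = rotate(weights=[1, 4], [jingles, music])

# Encode to MP3 and send to a (hypothetical) local Icecast server.
output.icecast(%mp3, host="localhost", port=8000,
               password="hackme", mount="radio.mp3", radio)
```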
They are radio users, and so they want to be helped. They need support. They need to understand a programming language, which most of them have never used, and we provide tools for that. And one of the first tools we provide is verification of specific properties on a stream. One of them that's very important is: can a stream fail?
You want a stream, and then suddenly you get nothing to stream, and you're like, whoa, snap, I can't feed my output. So a typical example here is that if you look at those playlists of files and jingles, the playlist may be empty. One of the files may not be able to be decoded. Maybe all the files cannot be decoded.
And so at the end of the road, on that Icecast output, you might reach a point where you're like, whoops, don't know what to play. So what we're going to do here is we're going to fail on that script and say that source is fallible. Most users will report that as a bug, or it's going to get them annoyed. But in reality, it helps them a lot
because what they're going to have to do next is this. We're going to add a safety. Single is a source that has one file instead of a playlist. And it repeats over that file. And because the path to that file is static, we're going to try to decode it when we start the program. We're going to realize we can decode it.
We'll make one assumption, a reasonable one, which is that the user is not going to delete the file. But that's all. And then from there, we're going to be sure that the stream we've built cannot fail: it's infallible. Another thing that I'll show later is that we have static typing that's catered to our users. It carries the source media content.
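To recap the fail-safety idiom from a moment ago, a sketch (the default file path is hypothetical):

```liquidsoap
# A playlist may be empty or undecodable, so this source is fallible.
radio = playlist("/path/to/music")

# single() decodes its static file once at startup, so it is known to
# be infallible; the fallback as a whole therefore cannot fail.
safe_radio = fallback(track_sensitive=false,
                      [radio, single("/path/to/default.ogg")])
```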
It can tell you if you have unused variables. Most languages would do a warning if you have unused variables. We have chosen to make it an error because most of the users are not programmers. And so they're not going to realize that they might be doing something wrong if they have a variable that's dangling and not being used. Another typical example we have is time predicates. Most of the things you want to do with the radio comes with scheduling.
You want to make sure that you have a certain playlist that goes on Monday at noon, something else that goes on the weekends. We have time predicates in the language so that it's very easy to create a switch here between different times and days. For instance, 8 p.m. to 10 p.m., you have prime time.
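A switch like the one just described could look like this (all the source names are hypothetical; in Liquidsoap's time-predicate notation, 1w means Monday):

```liquidsoap
# Pick a source depending on wall-clock time.
radio = switch([
  ({ 20h-22h }, prime_time),      # every day, 8 p.m. to 10 p.m.
  ({ 1w },      monday_show),     # all day Monday
  ({ true },    regular_playlist) # default
])
```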
Or on Monday, you have a specific source. That's the kind of thing that we do to make a language that's dedicated to specific users and specific uses. There's a lot of them, which I won't have time to present, unfortunately. Quick history of the project: it was founded in 2003 by David and Samuel. David was responsible mostly for the language part,
typing and everything, and the streaming. Samuel was more the guy doing all the experiments with the weird stuff. I'll talk about it more. Savonet was the original name. It was very nerdy, because it was a student project at the ENS of Lyon. The purpose back then was to share the music they had on the Windows network
and be able to stream while coding, switch between different tracks, have user requests and everything. Those are the features I'm talking about. They created a new language, which is the part that's really original in that tool compared to the other tools that exist in the landscape. Part of the reason was that it was a school project.
The school pushed them to actually inject into their project some things that were, back then, pretty theoretical, from computer science classes. I think one thing that's been great in the 10, 15 years that we've had on the project is to see a lot of these tools that back then were kind of niche and very academic being used in real applications and helping people.
Another aspect of that is OCaml, which is the implementation language. That question is interesting because when we started presenting Liquidsoap 10, 15 years ago, it was one of the biggest questions. Why do you use this tool? Isn't it a purely academic language?
It's interesting to see that now that question is easier to answer, because OCaml has been used widely by Facebook and by other industry players. So, I guess, another proof of success of academic ideas. Let's look at the language a little bit closer for a minute.
It's a scripting language. That means that it's not compiled. It's executed at runtime. There's no optimization. We don't really need optimization because everything that's like encoding and decoding, we don't do that. We delegate that to other libraries. We just do, as I was saying, the higher-level manipulation of this data. It's a functional language, so that was the part that was the academic thing.
There's a lot of ideas and definitions for functional languages, such as the fact that there's single assignment and you cannot reassign a variable. But what it really means for us, the most important part, is callbacks. Functions can be parameters. You can pass an argument that's itself a function. And it's very useful because when you set up a stream,
you're going to have a lot of events that come through. So, for instance, here, it's an input that someone can connect to. I'll explain what the harbor is for us. And you want to know when someone connects, to maybe update your website, to do things in the background. So you can pass a callback here and implement whatever the user wants to do when someone connects.
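Such a callback might be wired up like this; the harbor input (where Liquidsoap itself accepts an Icecast-style connection) comes up later in the talk, and the handler body, port, and password are just illustrations:

```liquidsoap
# Called with the connection headers when a client connects.
def on_dj_connect(headers) =
  log("A DJ just connected")
end

live = input.harbor(on_connect=on_dj_connect,
                    port=8005, password="hackme", "live")
```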
And that's a very useful tool for us. Static and inferred types. That's another one from the academics. That's the OCaml thing, particularly. What it means is that we're going to provide types for all variables, an integer, a boolean, and that's going to help make sure that the program does the right thing.
When you do JavaScript, sometimes you receive a variable. It's the wrong type. You call the wrong method. Boom, you crash. So we make that impossible, but the types are inferred in the sense that the user never has to write them. We guess them, and we guess them right. And because we're a language specific to streams, we're going to have types that are specific to streams.
So here, for instance, the type of a source that has stereo audio, no video, and no MIDI. And how do we infer that type? Well, we infer it from the output down. So when we have an output, you remember we saw the Icecast with an MP3 output? This MP3 output, if you don't change the parameters,
is assumed to be stereo. And so that 'a is a universal type variable, and that output is going to give us a type, here, audio 2, video 0, MIDI 0. And then, from there, we're going to go down the source tree by saying everything that connects to that output needs to have stereo audio and nothing else. And then we're going to check at every operator
that we have the right type. So the user doesn't have to do it, but we do it. And if the user is making a mistake, for instance, trying to put a mono signal into a stereo output, we're going to detect that and tell them, hey, you should really convert to stereo. The last thing, which I'm not going to dwell on too much, is about the programming language by itself.
So we have optional and labeled arguments and functions. Pretty useful. The reason is that if you have labeled arguments, they are more easy to understand what they do. Optional means that you can also provide default values. We'll see that a lot of our operators have default predefined values so that the user doesn't have to do it. But if you want to do it, you can go deep, far and deep,
changing all sorts of things. And the last thing that's important with that language is that it's self-documented. So you can actually be like, hey, what can I do with an SRT input, for instance? We saw the SRT protocol; we just added it. We'll see with the next presentation why. So if I'm a user of Liquidsoap
and I don't know what to do with an SRT input, I can do liquidsoap -h input.srt and voila. I have the type that tells me exactly what it receives. So see, here is the source that you're going to create, and then all sorts of things that describe what I can do with it, and all the arguments are also documented.
All right, so some common features in liquidsoap that we can do. So it's not just a language. It's also what can you do with it, input, output, decode, encode.
So first of all, we support a large set of audio and video codecs. The reason for that is that we delegate that to libraries. Historically, we used to implement ourselves all the bindings for MP3, OGG, Theora, everything. It's a lot of work. Lately, we've made a vast improvement on that
because we are now able to support pretty much whatever FFmpeg can support in terms of input and output, not just codec, but I'll get back to that. So that means that all the fancy video codecs that we just presented earlier, we will support them. And we will support them in a way that to us, it's opaque.
We just delegate that work to whoever does it best, which is not us. We have a lot of inputs and outputs. So, based on those codecs and types of stream media data that we can receive, we can input from a sound card. That's ALSA, PortAudio, AO for the output. We've got a lot of them. So that means that in a radio situation,
you can input audio from the studio. Some DJ wants to connect, boom. You can use the sound card basically. We have file output and input. I don't know why I just put output. So we can stream playlists. All that is pretty standard. That's for the output and input.
So we can receive an HTTP stream. We can go pull an HTTP stream. We can send to an Icecast. Very recently, we've added HLS, for instance, which we just talked about as well. We can do input and output in HLS and SRT, as I said. So those are kind of the network layers that we support. We have the harbor I talked to you guys about earlier,
which is basically: when you run a Liquidsoap stream, a lot of our users are like, hey, I want to switch from a playlist to a live stream that someone is going to stream from an Icecast client. And originally, the first way we did it was to pull on an Icecast mount point and then loop it back into the system.
It was pretty hacky. So one of our early contributors came back with this idea of a harbor, which is basically: we provide the receiving endpoint for an Icecast client. The system runs, opens a port, has a password set up, some kind of authentication with a callback, because we have callbacks. And then the user comes in, the DJ, connects straight into Liquidsoap,
Liquidsoap receives the stream, decodes it, and boom, can switch around seamlessly with a transition, with a lot of nice effects, maybe put a jingle, and go from files to a live show. And yes, as I was saying, on top of that now, we're going to support not just codecs, but everything as input and output that FFmpeg and GStreamer have.
I would suggest FFmpeg more, because GStreamer is more difficult for us to integrate as a backend. But that means that we can send an RTMP stream to YouTube. That's a new thing that people have been doing, creating videos with the tool and then sending that to YouTube as a live video. It's pretty amazing to see.
15 years after starting it, I have to admit. Yeah, all right, it's here. Functional cross-fading, I'll go back to that. That means that you can actually code the kind of transitions you want between your tracks, depending on some data that we provide, like a loudness level, typically. We could do blank detection.
That's another feature that people like a lot, where, hey, I want to know programmatically when my stream is failing. Maybe someone at the studio started this thing but is not talking on the microphone. Maybe that happens a lot. We used to have all these CDs that had hidden tracks after 20 minutes. Well, you encode it, you put it in a playlist, and then you're like, whoops, just did 20 minutes of blank.
We can detect that. We can skip. We can send an alert, a lot of things like that. And that's Samuel here. I was telling you he was doing all the weird stuff. He also did all the LADSPA plugins. So every plugin that exists in the landscape of LADSPA plugins, we can support for audio effects so that, again, we don't redo things that other people do best.
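The blank handling mentioned above might look like this (operator names from the 1.x API; the path and the alert message are hypothetical):

```liquidsoap
radio = playlist("/path/to/music")

# Skip to the next track after 10 seconds of silence.
radio = skip_blank(max_blank=10., radio)

# Run a callback when the stream goes silent, e.g. to raise an alert.
radio = on_blank(fun () -> log("Stream went blank!"), radio)
```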
One of the last features that I'm working on is the FFmpeg filters themselves. So I'm pretty excited. It should land soon, where we can also reuse the filters that is in the FFmpeg API. So that's some features. And I want now to combine them and show you guys an example of how you can build a radio with Liquid soap.
Let's say that I want to build a web radio, and I want an automated switch from playlist to live content. So I want files that are played, and then the DJ comes in, boom, connects to the harbor input, and I switch to that seamlessly. Pretty nice. I want to have user interaction, so I want users, maybe mostly people from the station,
to be able to also push on-demand requests during the files' playback, so that I have a playlist, but if someone really wants to push a song, they can connect, send a command, and boom, that file is queued for playing. I want normalized audio, because I want to make sure that the volume between the tracks that I have is consistent,
and it's a good experience for the listener. I want compression, because that's also what people like. Like on FM radio, with the loudness pushed to the maximum so it sounds great in your car, I want to do it. I want crossfade transitions. I want to make sure that when it goes from one track to the next one, I have this nice volume overlap, but also in a way that's specifically
catered to different tracks. Sometimes you have tracks that already have a fadeout. Sometimes you have tracks that have a very sharp ending, and you want to know how you're going to combine that track with another track that slowly builds up, typically. I want jingle transitions. Maybe when I start a live show, I want to have a jingle. I want every five tracks to be a jingle of the radio. I want top of the hour jingle.
That's the kind of thing. And, once I've built all that, I want to send it. I want to send it to all the clients in the world. I want to send it in MP3, AAC. I want different qualities for those. I want Icecast. I want HLS. Maybe I want YouTube as well. And that's my whole configuration. Yeah, multiple destinations.
And then someone comes along. Yeah, so it's already pretty tricky. That's a lot of things. Now we realize that you're not just generating a stream, sending it to an IceCast server. You've got a lot of things you'd like to do, and they're all pretty tricky. And on top of that, some people want to do video. Or, if you're Sam, you want to do MIDI,
which I think we can support. I don't know how, but you'll have to ask him. So, here's how you do that with Liquidsoap. Here's a configuration. That's the first half. We'll go over the second half later. Let's slow down for a minute. So, user interaction. I'm going to enable the telnet server. That's one of the easiest ways to interact with Liquidsoap.
We can also define HTTP endpoints. We can interact with the socket. We've got a lot of different entry points. I'm using this one because it's just simple to explain. In our provided library, we have a feature that, basically, when a track is processed and made ready
for the system to stream, we also compute the replay gain value, which gives us an indication of how much the dB level should be adjusted to normalize the volume. So, I'm not going to give you the details, but basically we know how to prepare and tag a track with the right replay gain value. I'm enabling that here. We'll see later how it plays out in the script.
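Those two setup steps might look like this in a 1.x-style script (setting and helper names can differ between Liquidsoap versions):

```liquidsoap
# Enable the built-in telnet command server.
set("server.telnet", true)

# Compute and tag a replay gain value when tracks are prepared.
enable_replaygain_metadata()
```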
And then that's what we saw earlier. I have a list of files, a list of jingles. I'm going to combine them, one to four. One jingle, four files. One jingle, four files. Once that is done, remember that we wanted the replay gain. We wanted to make sure that the volumes on the files is consistent.
We're going to use an Amplify operator. That operator starts with one. It doesn't touch the volume. But it's going to have an override, which is basically a metadata that is being sent along with the tracks that says replay gain minus two dB. And when that operator sees that metadata, it's going to readjust the value
of the current amplification, to know that it needs to lower the volume for that track. Now we have built a stream of files that has one jingle every five files and has normalized audio. The second feature we wanted was user requests. I'm going to create a queue. I should have used equeue, actually,
which is the operator. Equeue is an operator that has a telnet command embedded in it. So it's going to automatically create a telnet command with that ID. You can connect to it and say user_requests.push with a file name, and boom, it's going to be queued in the requests. And we combine them in a fallback. So the fallback is an operator
that's going to take the first available source. So most of the time, that source here is not going to be available because it doesn't have any request. So the file is going to be picked. But if you push a request here, that source becomes available. The fallback knows that and switch to that so the next track you're going to play is a user request.
And remember I was telling you guys that we're going to try to make it easy for the user. So we have this parameter here that says track sensitive. It means that I am not going to switch a source until I have reached the end of a track so that when I push a request, it doesn't cut the current track. It just waits for the end of the song, plays a user request.
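Putting the normalization and the request queue together, a sketch (operator names from the 1.x API; path and ID are hypothetical):

```liquidsoap
music = playlist("/path/to/music")

# Adjust each track's gain from its "replay_gain" metadata tag.
radio = amplify(1., override="replay_gain", music)

# Queue fed over telnet: "user_requests.push /path/to/song.mp3"
requests = request.equeue(id="user_requests")

# Play pending requests first, but only at track boundaries.
radio = fallback(track_sensitive=true, [requests, radio])
```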
Next, we're going to put a crossfade. It's so smart. If I have time, I'll explain what it does. But that's going to basically merge those tracks and make them nicely overlap. And we'll get to the last stage, which is I want that stream to be able to receive user live stream, someone that connects as a DJ.
That's going to be a harbor here. It's going to be a source, I call it live, that will be available when the DJ connects to the live mount point. Now let's combine that and make it an output. So first of all, I need to combine live and radio together. I'm using a fallback again, but check that out. I'm using track sensitive false here. And the reason is that when you start a live stream,
you're not going to wait around. You want the DJ to be on the air right away. So this is going to be less nice: if radio is still playing a file when the DJ connects, it's going to switch right away. You have the possibility to add a crossfade transition to make it nicer; I'm not going to go there. I want to compress all the signal that comes out, everything, I want it to be compressed.
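The live/playlist switch and the compression stage could be sketched as (sources, port, and password hypothetical):

```liquidsoap
radio = playlist("/path/to/music")
live  = input.harbor(port=8005, password="hackme", "live")

# track_sensitive=false: put the DJ on the air immediately, mid-track.
full = fallback(track_sensitive=false, [live, radio])

# Built-in compressor; a LADSPA multiband plugin is another option.
full = compress(full)
```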
We have our own homemade compressor, but really, you want to use LADSPA, maybe. You want to use a multiband. We're going to try to support that from people who know how to do it. And then, a list of formats that I want to do. So that's one of the nice things about being a programming language. I can just make a list of the name and parameters of each encoder,
and now I can pass that around systematically. So HLS takes all of them and encodes in different segments, if you're familiar with it. And then, for Icecast, I'm going to use a function, because it's a functional language, that takes the config and creates the output, and I'm going to iterate over that list.
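That encoder list and the iteration might look like this (encoders, bitrates, mounts, and server details are all hypothetical; full stands in for the combined radio source):

```liquidsoap
full = playlist("/path/to/music")  # stands in for the full radio source

formats = [("mp3", %mp3(bitrate=128)),
           ("aac", %fdkaac(bitrate=64))]

# Create one Icecast output per (name, encoder) pair.
def make_output(fmt) =
  name = fst(fmt)
  enc  = snd(fmt)
  output.icecast(enc, host="localhost", port=8000,
                 password="hackme", mount="radio." ^ name, full)
end

list.iter(make_output, formats)
```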
And there we are. In very few lines of code, we have a very nice stream. There are a lot of things that I wanted to talk about more, like smart crossfade. I can't talk too much because I'm already running out of time, but that's how it works. You receive information, basically the loudness of each track, and you can make a programmatic decision about what kind of transition you want.
It's pretty nice. And there's a lot of things about latency control that would take a lot of time to explain again. You can come and talk to us if you want. It's very interesting. We had a lot of issues with that. And Radio France, they're going to talk maybe a little bit about it. I don't know. Yeah, so now I have just enough time to finish with the future developments,
and I think it's pretty exciting for us, because remember, we want to do well what we do, and we want to delegate everything else to people who do it better than us. We're a language. We are a very expressive way of describing the manipulation of streams, but we don't want to do the encoding or the decoding or anything.
So we want a tighter integration with FFmpeg. I want basically to support all input and output support, all the network protocols they support, because we don't want to do that. And frankly, the API for FFmpeg is great. I love working with it, so I'm going to push as much as I can with that.
Yeah, extensive support for input and output encoding formats, filters, more support for video, because historically we couldn't keep up with all the different encodings and decodings that were being put out. Now we're going to be able to do it. And even more, something that we just talked about yesterday with Samuel, it's been a huge feature request from our users for years: because of the nature of cross-fading, of a lot of transitions,
we need to decode content. But we're starting to think that we might also be able to support encoded content from end to end, as long as you don't do anything that requires us to do computation on the content, like a cross-fade. So those are the future developments. I think we should have that within four to six months. We're working on more documentation and all that at the same time.
So that's it. Thank you very much for your time, and maybe you have questions or anything. I've got nothing on the book for now.
I would love to look at it. It's always the same answer. It depends if there's a good library that can interface nicely with us. We're always happy to add more features to the software.
Yeah, that's a really good question. For now, the streaming model is: you write a script, you run it, and if you want to change it, you need to shut down the system and restart it. It's also been a big, highly demanded feature.
It's complicated, but maybe we'll get to that. If we get to encoded content, who knows? We can do a lot of things we used to say we couldn't do for years. Say your name again?
Sonic P. I'll check it out. Thank you very much.
Thank you for having us.