OpenSource Miracast
Formal Metadata
Title: OpenSource Miracast
Title of Series: FOSDEM 2014 (part 137 of 199)
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/32594 (DOI)
Transcript: English(auto-generated)
00:16
Okay, now I need to set up the VGA port; my X server crashed all the time, so I now have another card.
00:27
Seems to work, yeah. I'm going to talk about Miracast on Linux, and the last half year I spent implementing it and making it possible.
00:42
Miracast is basically HDMI over IP, and that over Wi-Fi. It's not like DLNA and other media streaming things, which are really centered around files and seeking and give you horrible UIs where you select a file and do stuff.
01:05
Miracast is really just about getting your screen from one device to another. It has a source and a sink. A source, obviously, is a device that creates an image, however it does that.
01:20
And a sink is a device that displays the image, usually a TV or a monitor. The real thing about the connection is that it uses Wi-Fi peer-to-peer. I first want to explain what all these words mean, since I'll use a lot of them. The 802.11 specs are the official Wi-Fi specs that define how Wi-Fi works.
01:47
And then there's the Wi-Fi Alliance, which creates common-practice specifications that define things on top of these Wi-Fi specs. There's one spec that's called Wi-Fi Peer-to-Peer.
02:03
I think that's six years old now; I'm not sure. Wi-Fi Direct is the certification program name of the Wi-Fi Alliance; it's basically the same thing. In 2012 they released the Wi-Fi Display specification, which defines Miracast.
02:22
Miracast is again just a fancy name: it's the certification program of the Wi-Fi Alliance. I'll try to say peer-to-peer and Wi-Fi Display, so I hope you don't get confused by that.
02:40
The first problem I had is that all these specifications from the Wi-Fi Alliance are not free. You have to pay for them, but apparently you can just Google for them and find them, and then you learn how Miracast and Wi-Fi Display work. The architecture is: you first have two devices which have to discover each other,
03:04
and you have to set up the peer-to-peer transport. On top of that you create an IP link; you use your IP network from one peer to the other, and then you just use TCP/UDP to get your audio-video transmission. It's quite simple, and it tries to reuse as many known protocols as possible.
03:26
I want to go through each of these steps and explain all the work it took to implement them. For transport we have three things. We have device discovery.
03:41
We have negotiation and connection establishment. And we have the IP link setup. Sounds straightforward. Apparently it's not: it took me four months to get this working. It was this Monday that I got the first Wi-Fi device working.
04:11
So, discovery: the Wi-Fi peer-to-peer standard defines how that works. You simply have two devices with a Wi-Fi card, and they send beacons so they can find each other,
04:23
and you use the normal Wi-Fi scan to find other devices. The problem now is you need to know whether a device is peer-to-peer capable and, obviously, whether it's Miracast capable. What peer-to-peer and Wi-Fi Display use are
04:40
information elements, which you can put into different frames defined in the Wi-Fi specs (action frames, beacon frames and so on), and which say whether you support Miracast or not, what resolutions you support and so on.
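The filtering described here works because those information elements carry a list of tagged subelements. As a rough illustration (the exact field layout comes from the paywalled spec, so treat this as a sketch based on publicly available Miracast implementations, which use a 1-byte subelement ID followed by a 2-byte big-endian length), a parser for such a TLV-style IE body might look like:

```python
import struct

def parse_wfd_subelements(body: bytes):
    """Split the body of a WFD information element into (id, payload) pairs.

    Assumed layout: 1-byte subelement ID, 2-byte big-endian length, payload.
    """
    subelements = []
    offset = 0
    while offset + 3 <= len(body):
        sub_id = body[offset]
        (length,) = struct.unpack_from(">H", body, offset + 1)
        subelements.append((sub_id, body[offset + 3:offset + 3 + length]))
        offset += 3 + length
    return subelements

# A made-up "device information" subelement (ID 0) with a 6-byte payload.
example = bytes([0x00, 0x00, 0x06]) + b"\x00\x11\x02\x2a\x00\x28"
print(parse_wfd_subelements(example))
```

A scanner would walk every discovered device's IEs through something like this and keep only the ones advertising sink capability.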
05:02
That way you can filter devices: you can directly see whether a device you've found is a Miracast device or not, which is quite handy, and you need that. On the other hand, there is a whole lot of information you can put in there. As I said: supported audio modes, supported video modes,
05:22
supported resolutions and so on. So it's gotten huge, and you have to write a lot of code to get it working. I don't see why, because you do the exact same thing again once you have a TCP connection. Discovery actually works quite nicely on nearly all Linux machines
05:43
which can do peer-to-peer. The problem is negotiation. When one device connects to another, the peer-to-peer standard defines action frames to do a group owner negotiation.
06:01
That means one of the two devices needs to create a soft AP and the other one connects to this AP, but you don't know up front which device does which. So the devices do this three-way handshake, which also works quite nicely, and then the one side creates an AP and you connect to it.
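The three-way handshake decides who creates the AP using numeric GO intent values. A small sketch of the decision rule as the P2P spec describes it (intents range from 0 to 15, a tie-breaker bit resolves equal intents, and the negotiation hard-fails when both sides insist on 15):

```python
def becomes_group_owner(my_intent: int, peer_intent: int,
                        my_tie_breaker: bool) -> bool:
    """Decide whether the local device becomes group owner (GO).

    Wi-Fi P2P rule of thumb: higher GO intent wins; equal intents are
    resolved by the tie-breaker bit; two devices both demanding intent 15
    fail the negotiation entirely.
    """
    if my_intent == 15 and peer_intent == 15:
        raise ValueError("GO negotiation fails: both sides demand to be GO")
    if my_intent != peer_intent:
        return my_intent > peer_intent
    return my_tie_breaker

print(becomes_group_owner(14, 3, False))  # True: higher intent wins
```

A source that must be GO (as the Wi-Fi Display spec reportedly mandates) would advertise the maximum intent, which is exactly where the broken-driver behavior described later in the talk hurts.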
06:21
Again, you mark the AP with information elements so normal clients will not try to connect to it. The spec actually allows both peer-to-peer and TDLS, but peer-to-peer is mandatory, and I haven't seen anyone using or supporting TDLS.
06:43
TDLS means Tunneled Direct Link Setup: two devices which are connected to the same access point can create a direct link by routing the messages that set up this link through the access point.
07:00
On top of that, once you have this usual AP setup, you use WPS and WPA2 like on every other network. Once you have your Wi-Fi connection, you can create an IP link.
07:24
We have just two devices, so what you could do with just IPv6 is use link-local addresses.
07:42
You only ever have exactly two devices, so you know who the other guy is and where to send your commands. Apparently that was too hard, so the spec doesn't do that. What it does instead is a network with two clients
08:00
where you use DHCP and IPv4. And the DHCP server implementations we have as open source are horrible to use in an ad hoc mode where you just want one link: you want to spawn the server, pass it an interface
08:23
and a prefix, and that just doesn't work, so I actually had to take a common implementation out there and rewrite it to do just that. And IPv4, obviously, is quite a small namespace;
08:42
they normally use 192.168.x.x, and if you're unlucky, two networks you're connected to just use the same addresses. You have no chance to avoid that. Apparently IPv6 is too new a technology
09:01
to be used. The irony is that the spec supports stereoscopic 3D, which is probably 10 years older than IPv6, but not IPv6. I don't know.
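For contrast, the IPv6 setup the talk wishes for would cost almost nothing: with exactly two peers, each side can derive the other's link-local address from the MAC address it already learned during discovery. This is the standard EUI-64 derivation, nothing from the Wi-Fi Display spec:

```python
def ipv6_link_local(mac: str) -> str:
    """Derive the EUI-64-based IPv6 link-local address from a MAC address.

    Flip the universal/local bit of the first octet, insert ff:fe in the
    middle, and prefix the result with fe80::. No DHCP server required.
    """
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                      # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = ["%x" % ((eui64[i] << 8) | eui64[i + 1]) for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(ipv6_link_local("00:11:22:33:44:55"))  # fe80::211:22ff:fe33:4455
```

Since both sides see each other's MAC in the P2P frames anyway, neither a DHCP server nor address-collision handling would be needed on such a link.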
09:20
They just screwed that up. Anyway, with that set up, P2P should just work. Obviously, we still have a problem, because it's not that easy: it depends on the drivers on Linux, which need to implement P2P.
09:43
Many chips just cannot support it because the firmware doesn't allow these interfaces, the old ones and the Broadcom ones for example. I had a lot of devices at hand and tested them all. I first tried looking at what chipset a device has
10:02
and then plugging it in and testing it, and then I grepped through the kernel sources and tried to figure out which drivers supported it. Quite a few claimed to support it, so I went ahead and bought these devices. First I started with USB devices, and it didn't work.
10:22
It just failed. Then I bought an Intel card, because I thought Intel works quite a lot on this stuff, but it didn't work either. I wrote like 200 messages to the wireless development list, and they probably didn't care.
10:42
After pinging them three times, I managed to track down a bug, and with that I got the setup working. This is an ultrabook with a 7260 card, which is a PCIe card,
11:01
so you cannot easily plug it in elsewhere. You need exactly the wpa_supplicant version which I posted there, because everything newer is just broken. I told them, and I never got a response. Awesome.
11:21
So if any of you thinks you can go home and use it: you can't. Sorry. This was the transport setup, and it works. It's stupid, but what can we do? With that, we now have the other part
11:41
of the Wi-Fi Display specification, which defines, once you have an IP network, how to transport your data. And what it does is basically RTSP, plus RTP, plus MPEG-4 plus audio, and it puts that into an MPEG-2 transport stream, which is funny. If you work with that,
12:01
it works on Linux. After four months, when I got the connection setup working on Monday, it took me two hours with GStreamer to get the rest working. Did you publish the pipeline for GStreamer, to get it working?
12:22
I can just... No, I haven't written it up anywhere, but it's in the Git repository; you can just look at it. So I went ahead and decided that this Wi-Fi Display specification is quite nice
12:41
and has a feature I always wanted, but the P2P stuff is just not working. So I thought, why not just do HDMI over IP, Wi-Fi Display but without the Wi-Fi part, just requiring an IP network. Then you can use the implementation over Ethernet, over a usual Wi-Fi network, whatever you want.
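For the curious, a sink-side GStreamer pipeline for this kind of stream could look roughly like the following. This is a hypothetical reconstruction, not the pipeline from the talk's Git repository, and the port number is made up; the element names (udpsrc, rtpmp2tdepay, tsdemux, h264parse, avdec_h264) are standard GStreamer 1.x elements:

```python
def sink_pipeline(port: int = 5004) -> str:
    """Assemble a hypothetical gst-launch-1.0 command for the sink side:
    receive RTP over UDP, depayload the MPEG-2 TS, demux, decode H.264,
    and display the result.
    """
    elements = [
        'udpsrc port=%d caps="application/x-rtp,media=video,'
        'encoding-name=MP2T"' % port,
        "rtpmp2tdepay",
        "tsdemux",
        "h264parse",
        "avdec_h264",
        "autovideosink",
    ]
    return "gst-launch-1.0 " + " ! ".join(elements)

print(sink_pipeline())
```

This matches the talk's point that once the IP link exists, the streaming side is plain, well-known plumbing; only the P2P transport underneath is painful.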
13:05
For people who don't know how RTSP works: RTSP is the main part of the control channel that is used for Wi-Fi Display. RTSP is usually used when you have a streaming server and you want to access that server
13:20
and request the stream and get the data. The problem with Wi-Fi Display is that usually the source side, the side which produces the data, is the one which initiates the connection. So once you have the IP link set up, you have to wait for the sink to actually connect to the source and say: I want your data.
13:41
So it's actually reversed: the sink pulls the data. This is so ugly that plain RTSP messages don't allow the source to start streaming if the sink hasn't asked for the data yet. So they added messages
14:00
which allow the source to send a "trigger play" to the sink, and the sink says: oh, I'm supposed to be playing now, and it starts playing. They use RTSP just because it's mainstream. But RTSP is pretty easy;
14:29
it's basically HTTP, just with different commands, and you can just hard-code that and it works, and that's fine. Once you've got the sink connected, you do the negotiation:
14:42
the sink says which modes it supports, the source selects one of these modes, it can select the MPEG-4 profile and other modes. Once you have that, you start streaming, using RTP on UDP, and that's just
15:01
a stream of data from the source to the sink; the RTSP channel is not used anymore. You can request remote renegotiation and other stuff over it, but I haven't seen anyone who supports that.
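The "trigger" described above rides on a plain RTSP SET_PARAMETER request. A sketch of what such a message could look like; the wfd_trigger_method parameter name and the wfd1.0 URL follow publicly available Miracast traces, so treat the details as illustrative rather than normative:

```python
def trigger_message(cseq: int, method: str,
                    url: str = "rtsp://localhost/wfd1.0") -> str:
    """Build the SET_PARAMETER request a source sends to make the sink act.

    method is one of the trigger values seen in public traces,
    e.g. "SETUP", "PLAY", "PAUSE" or "TEARDOWN".
    """
    body = "wfd_trigger_method: %s\r\n" % method
    return (
        "SET_PARAMETER %s RTSP/1.0\r\n" % url
        + "CSeq: %d\r\n" % cseq
        + "Content-Type: text/parameters\r\n"
        + "Content-Length: %d\r\n" % len(body)
        + "\r\n"
        + body
    )

print(trigger_message(3, "PLAY"))
```

On receiving this, the sink issues its own ordinary RTSP PLAY back to the source, which is exactly the reversed pull described above.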
15:21
For video data you have MPEG-4; for audio, which you can also optionally transmit, there's AC3, AAC and some other formats, basically anything you want. The MPEG-4 stream is put into an MPEG-2 transport stream,
15:41
so you get reliable transmission, or rather mostly reliable transmission. Obviously, if you want full HD, you need an MPEG-4 encoder, preferably in hardware. The Intel GPUs do that, and it works, but with everything older
16:01
than Ivy Bridge, I guess it's not working; I haven't exactly checked that. Decoding works quite nicely even on slow CPUs. Some additional features. The basic setup is done: you have connections between the two devices and can transmit data.
16:22
As I said, I got that working really fast. There are optional features you can use, like PTP; PTP is used to synchronize the clocks. That's very important for the last of these features, which is split sinks. Split sinks means you have one audio device
16:41
and one video device, and you send the audio only to the audio device and the video to the video device. They are not connected to each other, so you obviously need to sync the clocks between all three, otherwise it won't work. That's optional too. Then there's HDCP, which is DRM content protection.
17:03
I won't implement that. There's UIBC, the user input back channel, which allows the sink to transmit input data back to the source: you can have a touch screen on your TV, you transmit video data to it, you touch your TV, and you can actually interact.
17:23
I wanted to show a short demo of how that works, with Android as a client. I had that working when I got here, and then I crashed my Android phone, and now it's not working anymore.
17:44
The problem is that the GO negotiation you do during Wi-Fi peer-to-peer setup, which decides who is the group owner,
18:01
is rather random. The Wi-Fi Display specification mandates that the source has to be the group owner, but if the GO negotiation, as always happens with this driver, just makes the peer,
18:20
the sink, the group owner, you cannot do anything about it. So, yeah. I'm just trying.
18:41
You can actually, and I'd say it's supposed to be reliable, set a group owner intent, which means one device doesn't want to be group owner. It's one of the many things you can set that just doesn't work, just doesn't do anything.
19:00
I guess it's again a Wi-Fi driver problem, but I cannot say. I actually wanted to show it
20:42
on the screen, but that's not going to work, because my X server crashes when I use it. I'm just putting it in front so you can see it. That's now
21:17
an X window displaying the data, and
21:21
that's my other phone, which I connected. I'd show you how the connection works, but that always takes like ten different shell scripts to get working, so I'm not doing that now. One fancy feature that Android has is during the password input: it doesn't transmit any data,
21:42
but once you're back on your own screen, it mirrors it directly again. Like if you're
22:00
in the menu, it directly detects screen rotation and adjusts the screen resolution. And now, the most interesting thing obviously is latency, throughput and so on.
22:22
And full HD is actually no problem at all; it works quite nicely regarding throughput. Latency, as you might see, is I guess below 100 milliseconds, but I had no chance to get it
22:40
better. But I can actually play games with it, which should show that latency is not as bad as you might think. This example game just shows it, so you can actually see that this is now
23:09
real-time transmission, and I'm not looking at my screen, obviously. Yeah, I'm not that good at it.
23:23
Yeah, audio doesn't work yet. Yeah, okay. I could do that all day long, but I won't. You see screen artifacts if you look closely, and you notice lags, which are in the
23:42
video decoder; I haven't figured out why I get these. There is one use case where you use Linux
24:00
as a sink, which is quite handy, because you can use a Raspberry Pi in this mode: it has a hardware decoder for MPEG-4, you can just attach it to your TV, power the Pi over USB, and then you have a display sink at your TV. It's not like it just works, though;
24:20
you need to sit down, spawn the shell scripts in the right order and press your thumbs. Another, much more interesting thing is to get Linux working as a source. And the problem with Linux as a source is that there are many different modes
24:42
that you can use to produce the data. What you could do is just take a file which contains a video and stream that; quite easy, but that's not what most people want to do with it. You usually want to
25:01
either clone your desktop or use it as a desktop extension, so it's actually a second monitor. The problem with that is that you have to write an X.Org driver, or a Weston plugin, or a kernel driver, because with Wayland we have the problem that every compositor has to
25:21
write its own driver for that, because it cannot rely on an X.Org-based driver. So I didn't go that way. On the other hand, I don't know how to read X.Org server code, so I'm quite happy that I didn't
25:40
have to. I wrote a kernel driver which just provides a second GPU, which is emulated and driven from user space. If you have two GPUs in your machine, they provide
26:01
different displays, but this is just a fake GPU, with no hardware attached anywhere, that provides a single frame buffer, a single CRTC and a single connector which user space can drive. X.Org can then pick up this card in addition to the GPU it already uses, and use it in
26:21
clone mode, use it as a separate monitor, whatever it wants. And another process, the OpenWFD process I've written, just reads out this data and pushes it over Wi-Fi Display to the external display.
26:41
The question was whether it's possible to use render nodes here. The question
27:07
doesn't really make sense, because the X.Org server gets two cards and it can already do GPU render offloading, so it only uses the fake card as output,
27:21
and if it renders for the fake card, it can use the other card just fine without render nodes: it can render with GL and just offload the data. Well, you could obviously use render nodes in this scenario too.
27:42
Yep, OK. I'm just trying to set this demo up, but it might be a bit fragile and take a while.
29:33
a list of
30:08
would have been nice to see, but what I got working is this: you can use con2fbmap to bind your Linux console.
30:24
It's nothing very fancy, but what works is that you can stream the Linux console from one PC to another one. And if you got that working,
30:41
well, obviously you can get anything working. If you write graphics drivers, you know that it's awful to work with the Linux console, and if something works with that, it works with anything. OK, next topic:
31:01
I implemented all this in a repository, and as I said it took me four months to get the transport stuff working, which is actually not really part of the Wi-Fi Display specification, because it should be just Wi-Fi P2P, and that should already be implemented; at least people claim it has been implemented for a long time. I just hacked together the demos
31:21
this week, so it's pretty ugly, but I'm working on it, will upload it again and write blog posts so people know about it. The question now is how to integrate it into distributions or GNOME or other stuff, and the way it works is that we first need NetworkManager,
31:41
ConnMan or other programs to provide a P2P API. Before they do that, we cannot do much about it, because you don't want your Wi-Fi Display program to control your wpa_supplicant directly, even though it needs to issue commands over it; you usually also want to use your normal AP
32:03
and get internet access, normal network access, and that won't work if the network managers don't provide this API. It would be quite simple: I have two programs which you can run. One is the sink; it just
32:22
displays the data. The other one, the source, would just create this fake kernel device; X would get a hotplug notification, see it, and just use it. So it's just spawning a command, and you could do that easily from any desktop system.
32:43
And that's basically it. I will try to get this demo working again, but if anyone has questions... Hello, can you hear me?
33:03
I just wanted to confirm: I think you said, about the lagging issues, that you traced them to the decoding part; just confirming that you have ruled out networking and bad Wi-Fi. The latency obviously comes from Wi-Fi, but the short
33:22
hangs you see happen even if I do local MPEG-4 decoding. One more thing, before other people ask: there are other implementations, from Apple and Chromecast, and both of them actually have a one-second
33:41
delay, and this is intentional in their protocols, so I don't think they are really usable for the same stuff. They are also both closed source; I didn't try to implement them. There is a Linux proof of concept
34:03
for Chromecast, I think; I haven't used that yet. There is a source... How did you implement that? Let him ask his question first. There is something similar in
34:22
Samsung TVs, I think, or LG. I tried it with a Nexus 4, which is supposed to support Miracast, but it didn't work at all; I just got connected to it, but nothing happened. You connected to what? To a Samsung TV, I think, or LG, I don't remember.
34:42
I told you about some of the wpa_supplicant problems and so on; I am not the only one having them. So if you buy products, you have the exact same problems, and you really need to be lucky to get it working, with stuff like the group owner negotiation. That is actually why I tried to
35:02
split both parts: I have this one part which does transport setup, and the other part which is the Wi-Fi Display thing. Now you can create any connection between two devices. You can use a soft AP, which has worked for years and works with all these devices, just connect, and then set up your Wi-Fi Display
35:21
on top of that IP network. But the specification doesn't allow that, so no one implements it. If you intend to buy products with Miracast support: I wouldn't do that.
35:43
So I was just curious about the buffer sharing implementation with Linux as a source, whether you had a provision to avoid copies as much as possible when taking the rendered buffer over to the network side. I simply memory-mapped that back into the
36:01
process. I obviously need to be independent of the implementation of the real driver which rendered the image, so what I did is support GEM import: I get the DMA-BUF handle from the real driver, so I have no copy on import,
36:22
and then you have one buffer copy there. Because I would like to just memory-map this buffer into user space so user space can do whatever it wants with it, since for X it now is the frame buffer; but we are not allowed to do that with DMA-BUF,
36:40
you can only do that in the kernel, because you have to call the DMA-BUF begin-CPU-access and end-access functions. So I have an ioctl to just report damage; you handle the damage and copy only the stuff that changed, actually the same thing that UDL does in DRM. You can get access to the
37:02
exact GEM handle and then pass it to libva. I don't know whether it supports this, but then you could even do hardware encoding from the frame buffer without doing a single copy; because otherwise you copy into the frame buffer and you copy again to the GPU to get it encoded.
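The damage-based copying described here can be sketched abstractly: given a list of dirty rectangles (the shape of data a dirty-fb ioctl hands over), only the changed region is copied between the two buffers. Framebuffers are modeled as plain lists of rows purely for illustration:

```python
def copy_damage(src, dst, damage):
    """Copy only the damaged rectangles from src to dst.

    src and dst are framebuffers modeled as lists of rows;
    damage is a list of (x, y, width, height) rectangles.
    Untouched pixels are never read or written.
    """
    for x, y, width, height in damage:
        for row in range(y, y + height):
            dst[row][x:x + width] = src[row][x:x + width]
    return dst
```

The point of the design is exactly this: a full-screen copy per frame would defeat the zero-copy import, while damage tracking keeps the per-frame cost proportional to what actually changed.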
37:28
Hi. How do you handle color space conversion, when necessary, and scaling of frames, if necessary?
37:42
Color space conversion? And scaling? OK: I pass it into GStreamer. Fair enough. I actually got a lot of emails from people when I hadn't even done any
38:01
coding yet, had just created a repository and some files. I got like four emails from companies which were working on Wi-Fi Display, and some of them asked me what my plans were, and I said: using GStreamer. And they told me it's bad, it's way too slow. So I was quite amazed
38:21
this Monday when I got it working that it was really fast. The first try actually had a two-second delay, but then I noticed I had just used a wrong parameter, and then got it down to a pretty nice delay, I guess. I've got a question about the streaming format:
38:42
you mentioned that you were using H.264 over RTP, AAC over RTP; I'm a little bit confused where the MPEG transport stream comes in, if you have audio and video streams that both have to be multiplexed over one RTP link.
39:01
So the audio and video are multiplexed into the transport stream, and the transport stream is encapsulated in RTP? Exactly. Thanks.
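This nesting also explains a constant that shows up in packet captures of such streams: TS packets are a fixed 188 bytes, and seven of them fill a typical Ethernet/Wi-Fi MTU almost exactly, so RTP payloads come out at 1316 bytes. A quick back-of-the-envelope check:

```python
TS_PACKET_SIZE = 188   # fixed MPEG-2 transport stream packet size
RTP_HEADER_SIZE = 12   # minimal RTP header
IP_UDP_OVERHEAD = 28   # IPv4 header (20 bytes) + UDP header (8 bytes)

def ts_packets_per_datagram(mtu: int = 1500) -> int:
    """How many whole TS packets fit into one RTP-over-UDP datagram."""
    rtp_payload = mtu - IP_UDP_OVERHEAD - RTP_HEADER_SIZE
    return rtp_payload // TS_PACKET_SIZE

print(ts_packets_per_datagram())                   # 7 whole TS packets
print(ts_packets_per_datagram() * TS_PACKET_SIZE)  # 1316-byte RTP payload
```

So each UDP datagram carries an integer number of TS packets, which is what makes the "TS in RTP" layering reasonably efficient despite the double encapsulation.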
39:21
Sorry, could you just... My question is: could you clarify what AirPlay and Chromecast are? Are they implementations of Wi-Fi Display?
39:41
Thanks. So, AirPlay and Chromecast, are they implementations of Wi-Fi Display? AirPlay is the one from Apple; it's a closed specification, and if you're a company you can get access to it and license it. But AirPlay, actually,
40:01
I don't know that much about it; I don't think it uses the same protocols. Somebody made a port for the Raspberry Pi for AirPlay. Sorry? Somebody made a port for the Raspberry Pi for AirPlay; it's not official, but it works. There's AirCast, as I said, someone reverse-engineered it. No, that was Chromecast.
40:20
It's actually better, I've heard about it, though I've never read it. The IPv4 crap they didn't do; they just used IPv6, I heard, which is really nice. I can't tell you much about that. For Chromecast, it's again
40:40
a different protocol; it's not the same, it's not compatible, and it's closed source. OK. Do you have some tricks or ideas to get Miracast support on HDCP-only devices? Because some are doing Miracast
41:00
but only with encryption, and this is a big problem for us. You can implement HDCP; for the specifications you need a key. I think you can get access to keys, but I'm not sure;
41:22
it shouldn't be that hard to implement, and there might be implementations already. It should be fairly easy: before you create the RTSP link you do the HDCP handshake, and that's already it; it would be just hooking into the setup,
41:41
one more shell script, and that should work. I don't have any keys, so I can't test it and won't use it. One second,
42:01
I just wanted to try the demo
42:39
one more time
42:56
You mentioned the Raspberry Pi earlier; I think it's a great device,
43:03
and this would make a lot of sense on the Raspberry Pi, so I was wondering if you actually tested these things on a Raspberry Pi, or on any other ARM device running Linux, for that matter. The thing with the Raspberry Pi is that it doesn't have a
43:21
mini PCIe port, so I couldn't use the Intel card, and all the other cards are broken, as I said. But I just used the usual Wi-Fi with a simple fixed IP setup and got it working, and you can use it just like on any other machine.
43:41
So it's not special at all. With GStreamer, of course, you need the hardware encoding and decoding, and GStreamer has what they call OMX, I think, something like that, which gets picked up in the pipelines automatically.
44:21
I'm curious about the work that you did with screen mirroring, to be able to use Linux as a source. I know that with Linux right now there's a really bad problem in the software realm with trying to do live screen sharing or screencasts. Do you think that's something someone would be able to tear out of there and use for that purpose?
44:44
Screen sharing meaning like a live screencast?