osmocom: Overview of our SDR projects
Formal Metadata
Title: osmocom: Overview of our SDR projects
Series: FOSDEM 2014, talk 135 of 199
License: CC Attribution 2.0 Belgium. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifier: 10.5446/32596 (DOI)
Transcript: English (auto-generated)
00:10
Oh, well, if they don't make it after your talk, you've got to make the feed run after.
00:22
There is just no good talk to skip. I took a big breakfast and I just ate something like a tree after all the talk just before the accident. I'll just eat it anyway. Okay, can everyone please settle down and then we can get started on Sylvain's talk. People in here, either sit down or leave.
00:45
Thank you very much. Good morning, thank you for being here. So, a few words about me, in case you don't know me: I'm Sylvain Munaut. I'm... speak up, okay? So, my name is Sylvain Munaut.
01:02
I'm an engineer and I'm generally interested in low-level development. And recently I got interested in everything that's radio and communication protocols and stuff like that. And I work inside the Osmocom project, and that's what I'm going to talk about today. So, what is Osmocom? Osmocom stands for Open Source Mobile Communications.
01:23
And basically what it is, it's sort of an umbrella project. It's a collection of sub-projects that are all related to open source and free software implementations of various communication protocols. It all started around 2010 with OpenBSC. And then we needed to create other projects and we needed a good naming scheme.
01:45
And so we came up with Osmocom and the osmo- prefix that we now often use. It was really centered around GSM at the very beginning because it started as a GSM project. But nowadays we have many more protocols and things that have nothing to do with GSM.
02:05
And a growing number of them now involve software defined radio, because it's just a very easy way to implement new protocols without having to buy expensive hardware. And so today what I'm going to do is try to give you a very short introduction to all the projects that are related to SDR.
02:24
We have quite a number of them, so it's going to be really a short introduction to each project, hopefully so you can see what interests you and dig a little deeper on our website or in other talks that have been given on those projects. Most of the projects I'm going to present have had dedicated talks
02:42
just for each of them in the past, so this is only a short introduction. Obviously I'm not actively involved in all of these. A few of these I'm the maintainer of, but some others I've never even used, so I just tried to ask the authors what to say about them.
03:04
So I classified them in three main categories. The first one deals with everything that interfaces with the external world, what I call the radio front end. Another category is signal browsing, so you can visualize signals, like a standard SDR application.
03:24
And finally, actual implementations of a given protocol and how to deal with that specific protocol. So rtl-sdr is probably the most well known of our projects, because it really boomed in mid 2012 or something like that
03:43
when kernel developers discovered that some of those DVB-T dongles that you could buy for very cheap had some kind of SDR mode where you can get raw I/Q samples. At that point an Osmocom developer basically created a library
04:03
so that you could actually use that SDR mode and integrate it into your own project, and thus was born the rtl-sdr project and the librtlsdr library that corresponds to it. So those DVB-T dongles, I mean, they're really cheap because they're mass manufactured and they're meant to receive TV,
04:23
but you have kind of a debug mode where you can use the raw hardware and get the I/Q samples, and it has everything that a classic zero-IF or low-IF direct conversion receiver has. So basically the antenna, an amplifier, some kind of filtering, a mixer down to baseband,
04:42
where it's sampled into I/Q samples that you can get and feed to whatever algorithm you want. I mean, on the other hand, you kind of get what you pay for, right? In absolute terms the RF performance is terrible. There are spurs all over the place, the sensitivity isn't the best,
05:02
you only get like 2.4 Msps, and even then you're not always sure that you don't have discontinuities in the stream and stuff like that. But it's a great way to get started with SDR with basically no investment. So if you don't have one I definitely recommend that you get one,
05:20
because it's also so small you can just have it with you all the time, right? It takes basically no space. Although the performance isn't good, it's still very much sufficient for a lot of stuff. I mean, we've managed to implement pretty much all the protocols using this dongle, because as long as you're not really working at the edge of performance,
05:45
trying to dig signals out of the noise, you're going to have a sufficient signal-to-noise ratio, and with all the error correcting codes and stuff like that, you'll manage to recover the signal without much trouble. So another project, which is less known, is the Miri SDR.
06:02
The first project was meant for the Realtek chip, and Mirics, another manufacturer, has basically the same kind of feature in their chips, and so an equivalent library was created. It's more of a proof of concept right now because, even though the hardware itself has better specs on paper,
06:23
there are some serious downsides. First, they're kind of hard to get. I mean, back when I bought one of them to test, I had to go through a Japanese mail forwarder because there was no way to order them here in Europe. They're more expensive,
06:42
and the biggest downside of all is the actual tuner: to be able to tune to different frequencies over the whole frequency range, it has different antenna inputs depending on which band you're in, and on most of the dongles you can buy, only one of those inputs is connected to an antenna connector.
07:03
All the others are just connected to ground, and so you have a very small frequency range that you can actually use, even though the hardware can do better, and with all those very small QFN chips, it's nearly impossible to go and solder on another antenna. If another manufacturer came out with the different antenna inputs wired up or something like that,
07:23
it would probably be worth investigating, but so far, if you have the choice, I'd definitely recommend that you get an RTL-SDR instead. Now, of course, these are just raw libraries, and we wanted to use them in GNU Radio, of course. So Dimitri here started working on gr-osmosdr.
07:43
gr-osmosdr is essentially a hardware abstraction block. It started only as a source, so that you can receive, but now there is also a sink, so that you can transmit with hardware that supports it. What it allows you to do is essentially create a GNU Radio application where you just use that source or that sink block,
08:01
and whatever hardware the user has, it's going to work for you. It has a large number of backends. You can read from files that you previously saved. You can use the FUNcube Dongle, everything that uses UHD, which means all the Ettus USRPs, you can use them,
08:21
and also the UmTRX. It supports OsmoSDR, which is actual Osmocom hardware, though it's not actually available currently. It supports the RTL-SDR. It supports the Miri SDR. It supports the HackRF, the bladeRF, the upcoming AirSpy, and some RFSpace receivers that are targeted at ham radio, I think.
08:46
The support for all those devices has been made possible by all the hardware vendors that have been kind enough to provide hardware so that we can develop and test, which is pretty important, and so a big thanks to them.
09:01
That block is now used in a wide variety of projects. I think maybe the most user-friendly one is GQRX. For those that don't know, GQRX is like a generic SDR application where you can listen to FM signals or AM signals and stuff like that. Underneath, it uses GNU Radio,
09:22
and it uses the gr-osmosdr source to support all that hardware as input. If you're creating a GNU Radio application, I definitely recommend that you use the gr-osmosdr block as a source. It kind of depends on your application.
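For illustration, a minimal receive flowgraph built around that source might look like the sketch below. This is not code from the talk: the device string, frequency, sample rate and output file name are made-up example values, and it assumes GNU Radio 3.7-era Python with gr-osmosdr installed.

```python
#!/usr/bin/env python
# Minimal gr-osmosdr receive sketch: the osmosdr.source block hides
# which hardware is attached; the rest of the flowgraph only sees samples.
from gnuradio import gr, blocks
import osmosdr

class RxToFile(gr.top_block):
    def __init__(self, dev_args="rtl=0", freq=100e6, rate=2.4e6):
        gr.top_block.__init__(self, "osmosdr rx sketch")
        # Hardware abstraction source: swap "rtl=0" for "hackrf=0",
        # "uhd", "miri=0", a file, etc. without touching the rest.
        self.src = osmosdr.source(args=dev_args)
        self.src.set_sample_rate(rate)
        self.src.set_center_freq(freq)
        self.src.set_gain_mode(True)   # hardware AGC, where available
        # Dump raw complex samples to disk for later processing.
        self.sink = blocks.file_sink(gr.sizeof_gr_complex, "capture.cfile")
        self.connect(self.src, self.sink)

if __name__ == "__main__":
    tb = RxToFile()
    tb.start()
    raw_input("Recording, press enter to stop...")  # Python 2, GR 3.7 era
    tb.stop()
    tb.wait()
```

The point of the abstraction is that the same flowgraph runs unchanged on any backend gr-osmosdr knows about; only the device string differs.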
09:43
If your application has very specific requirements that are tightly tied to the hardware you're using, of course it's not going to work, but most applications are not that tied to the hardware. They just want samples, basically, and this would work great. The block actually comes with some sample applications,
10:01
so that once you install it, you can actually directly start using it. It comes with osmocom_fft, which is a quick and dirty spectrum analyzer; you can just browse the spectrum using the default GNU Radio instrumentation. You have osmocom_siggen, which is basically the equivalent in the other direction,
10:23
where you can easily generate a tone or a modulated signal or even GSM bursts, just to test your hardware quickly. Now we move on to visualization.
10:42
There are two main applications. One is called SDRangelove and the other is called fosphor. This is what SDRangelove looks like. You'll see that fosphor looks very similar, and there's a very good reason for that. What SDRangelove is, is a standalone application.
11:02
It doesn't rely on GNU Radio. It's kind of an equivalent to GQRX or to SDR# or HDSDR, that kind of application, where you can tune to a signal and just listen to it. One of its main features is obviously its display, which is very nice,
11:20
but in the case of SDRangelove, it's also very CPU intensive. Which means it was originally targeted at hardware that could only provide a few megasamples per second of sample rate, like the RTL-SDR dongles.
11:41
It's been extended to actually be able to use gr-osmosdr. If you have GNU Radio and gr-osmosdr installed, you can just use them and then SDRangelove will actually use GNU Radio as a sample source, but you can also compile gr-osmosdr in a special standalone mode for this application in particular.
12:03
Another interesting feature is the ability to select a piece of spectrum, channelize it and send it to an external application, so you can use this application to bookmark frequencies and channels and tune to them, select just the channel you want and then send that to an external application
12:22
that will then do the processing, for example DSD or some other application that expects pre-channelized data at its input. This is fosphor and, as you can see, the kind of display is very similar, and that happened because of the performance issues of SDRangelove.
12:44
I don't have the latest laptop, and on my Core 2 Duo CPU, I can maybe process 2 MHz of spectrum using SDRangelove, and that's just not enough when you have an SDR that provides
13:01
like 40 MHz of spectrum, or 56, and now you get people with 200 MHz and more. So yeah, processing everything on the CPU just doesn't cut it. So I wrote fosphor, which is essentially a complete rewrite of the display part of SDRangelove,
13:23
but using GPU processing, and also designed as a GNU Radio block, because that's the other thing: I wanted to be able to use it in GNU Radio. And so this is essentially what fosphor is. It's GPU accelerated, which means, well,
13:42
you need a GPU, and you need a GPU with good enough drivers that they support OpenCL, and even more, what's called OpenCL/OpenGL interop, which means the cooperation and sharing of buffers between the actual rendering part and the computing part.
14:00
On OS X it means pretty much any recent machine should be supported, everything above the HD 4000. Actually even the HD 4000, but only on the most recent version, I don't know. If you're on Linux, that pretty much means only ATI and NVIDIA.
14:21
There is some effort to support Intel. I don't know its state; I just know that it doesn't support all the features that are required so far, but hopefully they'll get there, and that would be really great. So I'm hoping to be able to backport that display also
14:42
into SDRangelove, and it's also integrated in gr-osmosdr, which means if you launch the osmocom_fft demo application with the uppercase -F option, then instead of using the default WX widget, you'll get the fosphor rendering, which looks much nicer.
15:03
So I'd also like to take some time to explain exactly what is in that display, because it looks good, but it's also useful, right? It has some really interesting properties. So at its core, obviously, it's basically an FFT.
15:22
It's a real-time spectrum. One particularity is that every single input sample will go through at least one FFT. In the future, we're actually hoping that it will go through several FFTs, and it's important because if you have very short transient bursts,
15:40
like interference or whatever that lasts for a very short time, if you take the approach that's used by the default sinks in GNU Radio, they only take like a thousand samples, like 10 times per second, and if you have bursts that last for less than that,
16:00
you might never see them. In the case of fosphor, you will definitely see them, because every single sample is processed. So you will definitely see them in the waterfall and the spectrum display. Compared to some other waterfalls, like the waterfall you saw in the talk,
16:25
the waterfall here is much, much faster, in the sense that it displays only a few hundred milliseconds of past samples, and that means when you have hopping protocols that change frequency very fast, or when you have things like
16:42
frequency block allocation in LTE, like for example in the screenshot here. So this is an LTE signal, for those of you who haven't recognized it, and you can clearly see the periodic pilots here, and the block allocation, and stuff like that.
17:00
And you can see things well enough here that you wouldn't see in the default sinks. And of course, what's in the waterfall is also displayed in the live spectrum. It's averaged, and again, since we have so many FFTs,
17:22
so many spectra coming back, we can actually do pretty long averaging, which means it flattens out the noise. But even if you average a thousand spectra at a thousand points per spectrum, at 40 megasamples per second your time constant is still only like 25 milliseconds,
17:44
which means you get a very flat noise floor, but you still have a lot of responsiveness when new signals appear and stuff like that. You don't really notice the averaging. And of course, the main feature is the histogram,
18:00
which is the kind of display that you get on a high-end spectrum analyzer, which shows the statistical distribution of the signal. And if you go back to the screenshot, you can clearly see that this is the noise, and you can see kind of the spread of it. And you can see that there are two main power levels
18:21
for the signal. In the middle, there is pretty much nothing, and that's essentially a reflection that either on the _____ there is nothing between the gaps, or you have transmission of blocks, that kind of thing. So that's pretty much it for signal browsing and display.
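To spell out the averaging arithmetic mentioned just above (the FFT length of a thousand points is taken from the talk's rough figure, not from the fosphor source), averaging N spectra of L points each at a sample rate f_s spans

\[
T_{\text{avg}} = \frac{N \cdot L}{f_s} = \frac{1000 \times 1000}{40\ \mathrm{MS/s}} = \frac{10^{6}}{4 \times 10^{7}\ \mathrm{samples/s}} = 25\ \mathrm{ms},
\]

which is why the noise floor looks flat while the display still reacts almost instantly to new signals.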
18:45
I'm going to move on to actual implementations of protocols. So the first one I'm going to talk about is called GMR, and GMR stands for GEO-Mobile Radio, where GEO stands for Geosynchronous Earth Orbit. It's essentially a satellite communication protocol.
19:03
It's heavily inspired by GSM. Pretty much everything above layer 3 is going to be GSM, and everything below has just been modified enough to work on high latency links and stuff like that, and to save power, because obviously on a satellite phone you don't want to waste power transmitting a constant beacon for no reason,
19:21
and that kind of stuff. Its main user is Thuraya, which is a big satphone provider in Asia. I think there is some usage of the GMR protocol in the US, but not really targeted at consumers, more like machine-to-machine and stuff like that.
19:43
However, I've never actually had the chance to search for any of those. The current implementation is pretty much... it has a complete PHY,
20:01
complete in the sense that it implements all the channel coding and stuff like that, and the TDMA patterns, both for TX and for RX. The actual SDR part is currently the receive side only, but transmitting would probably be easy to implement. I mean, it's just much easier to modulate a signal than to try to recover it.
20:21
So it should be fairly easy to implement. It uses GNU Radio to channelize. It's not directly linked to GNU Radio; basically it compiles independently, but it expects pre-channelized data. I mean, there was no point for us
20:41
to just re-implement a channelizer when one exists in GNU Radio. What we essentially do is use a GNU Radio application that will take a chunk of spectrum, go through a resampler, then a polyphase channelizer block, and ship that to a bunch of named pipes on Linux.
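A rough sketch of that kind of channelizer flowgraph is shown below. It is not the actual osmo-gmr helper: the channel count, sample rate, center frequency and pipe paths are illustrative values, the resampler stage is omitted for brevity, and it assumes GNU Radio 3.7-era Python with gr-osmosdr.

```python
#!/usr/bin/env python
# Sketch: split a chunk of received spectrum into N channels with a
# polyphase filterbank and write each channel to its own named pipe,
# where a separate decoder process can pick it up.
import os
from gnuradio import gr, blocks
from gnuradio.filter import firdes, pfb
import osmosdr

NCHAN = 8            # number of output channels (illustrative)
SAMP_RATE = 2.4e6    # input rate; each channel gets SAMP_RATE / NCHAN

class Channelize(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self, "pfb channelizer sketch")
        self.src = osmosdr.source(args="rtl=0")
        self.src.set_sample_rate(SAMP_RATE)
        self.src.set_center_freq(1.545e9)   # example frequency only
        # Prototype low-pass filter for the polyphase channelizer.
        taps = firdes.low_pass(1.0, SAMP_RATE,
                               SAMP_RATE / (2.0 * NCHAN),
                               SAMP_RATE / (10.0 * NCHAN))
        self.chan = pfb.channelizer_ccf(NCHAN, taps, 1.0)
        self.connect(self.src, self.chan)
        # One named pipe per channel; note that writing to a FIFO blocks
        # until a reader (the decoder) has opened it.
        for i in range(NCHAN):
            path = "/tmp/chan%d" % i
            if not os.path.exists(path):
                os.mkfifo(path)
            sink = blocks.file_sink(gr.sizeof_gr_complex, path)
            self.connect((self.chan, i), sink)

if __name__ == "__main__":
    Channelize().run()
```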
21:02
Each of those pipes actually represents one channel, and those are used by the actual osmo-gmr application. It will take the samples, demodulate the broadcast... well, search for broadcast channels and, when it finds some,
21:22
actually follow them, decode the packets, take those packets, forward them to Wireshark, and when it sees channels being assigned, it will actually follow those channels. If you provide the proper decryption key, it will actually decrypt them, take the voice packets, save them to disk,
21:41
and that kind of stuff. So as I said, we forward them to Wireshark, and there is a good reason for that: we implemented a Wireshark dissector, which means you can actually inspect the different packets and what they contain in Wireshark pretty easily. We implemented the GMR-specific packets,
22:02
and, as I said, it's heavily based on GSM, which means for pretty much every layer 3 message there was already a dissector in Wireshark that had been written by people. I don't know who, but somebody wrote them sometime, and we were very grateful that they did.
22:21
And so we just reused that. As part of the project, the cipher itself was also analyzed, both by us and by the University of Bochum in Germany, and they published their results, and we actually validated them with our own off-air data. They presented an attack
22:42
on the cipher, which is really slow. We kind of developed our own, which is really fast. I mean, to give you an idea of the level of security in, you know, sat phones, it takes less than one second to recover a cipher key.
23:01
Even when you're decoding your own call, it's just faster to crack the key than to read it out. We had another problem: the voice codec. The voice codec is a proprietary AMBE variant, and if there are some hams in the room, you might know D-STAR and all those kinds of protocols; they all use AMBE variants,
23:22
but they're different enough from each other that you can't actually reuse the decoder. Last December, we actually completely reverse engineered the voice codec, and we now have a clean C implementation of the voice decoder for that codec.
23:41
What we're looking at now, for the next step in the project, is GMPRS, which is essentially internet over the satellite connection, and which is a completely different protocol and stuff. Maybe adding some TX support so that we can actually, you know, start like a sat phone base station on Earth, because for GSM we can experiment
24:01
and buy old hardware, but they just don't sell old satellites. Yeah, yeah, shipping would be a bitch. And we're looking into better GNU Radio integration, essentially, so that everything can happen inside GNU Radio, because especially for GMPRS,
24:23
the current approach we're taking with the polyphase channelizer has some limitations. Oh yeah, something I forgot. We actually have a map on the project. We're trying to collect as much GMR data as possible,
24:40
so if you're interested, definitely collect some data and send it to us so that we can gather more information about the broadcast channels, and we're trying to get more data, especially in Asia. And also, if you're in the US, I'm pretty sure there's some GMR signal there, but I don't know where, so you'd have to look.
25:00
And since it's satellite, you pretty much need a directional antenna, so yeah. Another project we created is osmo-tetra. So TETRA stands for Terrestrial Trunked Radio, and it's a digital trunked radio system that's targeted at government agencies,
25:21
emergency services, and that kind of stuff. It's widely used in Europe in general; I don't know about the rest of the world exactly. I know it's used here in Belgium. If you look around 390 MHz, you'll find what's called ASTRID here in Belgium, which is the TETRA network used by the police and stuff like that.
25:40
It's been encrypted since we published osmo-tetra, pretty much. When we started, it wasn't, and then we kind of tweeted that, oh my god, it's not encrypted, and like six months later they encrypted it. If you go through Brussels Airport, you can look around 410 MHz,
26:00
there's an unencrypted TETRA network there, just for the airport people, and they all have these Motorola radios. So, design-wise, it's pretty much the same thing: you have a PHY and MAC implementation as a separate application, and we use GNU Radio both for the channelization
26:20
and the actual demodulation. We actually reuse the demodulator from another GNU Radio project, OP25, because TETRA is π/4-DQPSK and they already had a suitable demodulator. Again, we use named pipes between the applications to pass the data around. We have a Wireshark dissector that was
26:47
actually kind of generated by us, but we didn't do most of the work. A university in China somehow described the entire TETRA protocol in ASN.1. And then from the ASN.1,
27:00
we could automatically generate a packet dissector for Wireshark. We're very grateful that they wrote that and that they let us use it. That was really nice, because if you've never written a Wireshark dissector: it's really boring. I mean, it's basically describing every little field with a help text
27:21
and a name. We have voice support. In this case, the codec is actually public; there's a reference implementation. It's not free software though: the license means I can't take it and redistribute it. But we have something that essentially downloads it and patches it.
27:42
It's not in master yet, but there will hopefully be some work done on it to make it more user-friendly. And the last project is OP25. OP25 is actually a project that didn't start as an Osmocom project; it kind of joined us afterwards,
28:02
and we now kind of host them. I've never used it myself, mostly because P25 is kind of the equivalent of TETRA, but for other parts of the world, mainly the US, Australia and Canada. So if you're from there, you should definitely check it out,
28:21
because you can most likely get some P25 radio signals there. It's entirely based on GNU Radio, and it's pretty complete. I mean, they have everything from the demodulator to the complete protocol stack to an implementation of the codec, and they also implemented Wireshark packet dissection and all the crypto stuff. They've recently updated it:
28:41
they've switched from SVN to Git, and at the same time they've made some architectural changes, including moving to the latest GNU Radio 3.7 API. They also use some of the more advanced features of gr-osmosdr for tuning and stuff like that, so yeah. It's definitely a great project,
29:04
and definitely a good example to follow. So that's pretty much it. I'd like to first thank you for your attention, and thank every developer of Osmocom that actually works on those projects,
29:20
and in general, in all the SDR projects, we need more people working on this stuff, to hopefully both make it more advanced and also make it more accessible to a wider audience, and there's really a wide variety of tasks to be done, and not only hardcore DSP stuff.
29:40
Yeah, if you have any questions. Is there a way to actually visualize, easily visualize, small packets, short bursts?
30:00
The problem is, if you do a waterfall, you only have packets sometimes. Either it's slow, and then you have the packet as a dot where you don't see detail, or it's fast, and then by the time you have... I know. Is there a way to trigger? No, there is no trigger yet, but I was thinking about doing things like spectrum-based triggering and that kind of stuff,
30:22
but it's really advanced kind of stuff that I... I definitely would like that, because as you said, there is no good compromise: either you just don't see them, or they just appear as a line. Currently, I'm planning on adding a pause mode so that you can at least pause the spectrum and inspect it and have a closer look,
30:41
but automatic triggering would definitely be good; it's just not there currently. Another question, not about the software, but about the hardware side. Do you support something that can transmit, say, 20 watts? I mean, yeah.
31:01
I'm trying to do it legally. Don't get me wrong. Not directly, I don't... Two megabits per second on UHF, or half a megabit on... I mean, you can do it, but usually it's not really our problem, in the sense that you will connect to an SDR and then that SDR will transmit a signal. If you want to transmit 20 watts, you're probably going to want to filter that signal at the output of the SDR,
31:21
then go through possibly a preamplifier, then another filter, then finally the power amplifier, but most of the SDRs I know don't have a 20 watt power transistor on them. There is one. It's the HPSDR. It has a PA of 20 watts or so. Ah, really? Yeah, the HPSDR was... That's an amateur radio project basically,
31:42
so yeah, they're interested in that. Ah, really? I don't... Well, real stuff costs money. He says that it's free, as in the design is free. Quick question. Is your project related to OpenBTS in some sense?
32:02
Not that much, actually. There is some relation in the sense that... So OpenBTS is mainly two applications, I'd say. One implements like layer 2, layer 3 and stuff, and one is the actual radio modem part,
32:23
and the radio modem part has kind of been split off. It's maintained by Thomas Tsou, who I saw... I don't know if he's here. Yeah, he's there, in the back. He's maintaining the actual SDR part of this, and I mean, although we host all this on our git,
32:44
and this is what he maintains, I just didn't include it here, because it's still mainly the work of the OpenBTS guys who did it originally. Yes? What is currently known about the TETRA encryption?
33:04
Nothing. I mean, really, not that much. The specs don't specify it. It's very hard to test, because it's very hard to get actual TETRA hardware
33:21
that both supports encryption and that you can easily play with. I mean, we have some TETRA radios, and some of them actually support the encryption, but we have absolutely no way of loading encryption keys into them. We have no idea how it works, because you just can't get the documentation. It's usually done at the factory,
33:40
or provisioned through a big system. So basically, we just know that we don't really know much, and certainly not enough to do anything about it. Yes? Can't you do that from the radio, too?
34:00
The problem is, in TETRA, it's not as... I mean, in the satphone case, in the spec you basically had a nice block schematic, and everything was known except for the actual encryption algorithm, right? But its inputs and its outputs were all specified and stuff.
34:21
In TETRA, you have the encryption algorithm that is secret, then you have the key derivation algorithm that is secret, and there's a bunch of them that are secret, and even if you reverse engineer the stuff, you have no idea what is what, and without the ability to actually test on real hardware to confirm that what you found is actually the real algorithm, and that you can actually decrypt the bits,
34:42
you're basically just working blind. And so if any of you has any contact in TETRA that can provide us a radio that can transmit with a known key, that would be really helpful. Yeah? Two quick questions about TETRA. One is, can you use TETRA simply as a packet data radio?
35:04
Yeah. Are there any patents that apply? I have no idea. Honestly, I just gave up on patents. When Google bought Motorola, do you know if they got the patents for TETRA? I have no idea. Sorry.
35:20
As I said, a lot of this stuff is patented. Yeah? Okay. Yeah. I figure pretty much all of them have patents, and if I stopped implementing stuff because it's patented, just... Okay. And the second question is, do you know if there is some sort of EU directive
35:41
deregulating TETRA in the future for ham use? Oh, so that you can transmit TETRA on amateur bands? Because it's very strictly regulated right now. Yes, yes. I mean, no, I have no idea. I think, I mean, when you buy TETRA radios, you can buy them for different bands.
36:00
I don't know if any of them actually overlaps with... I mean, one of the TETRA bands is near 433, right? But I don't know if it's... I mean, just because the frequency is right doesn't mean you actually have the right to transmit on it, so... Yeah. Legislation and me... yeah, okay.
36:23
Okay. Yeah, sure. Yeah, yeah. In green? I have one question. You mentioned that you're using the GPU inside GNU Radio and the GR platform for the FFT. Yes, yeah. And how much work would it be to adapt your code so that the GPU is doing something entirely different?
36:43
So you have... It wouldn't be that much work, but it really... I mean, okay. So currently what I'm doing is I'm taking the samples, I ship them to the GPU, and then I never see them again. Because transferring data from main memory to the GPU takes a lot of time.
37:02
I mean, even for the graphics cards that can process 200 MHz of spectrum, copying the data is like 90% of the execution time, right? And so I'm not going to copy a big chunk of data, then do an FFT and then copy it back; FFTW is just faster. If you are doing a lot of stuff,
37:23
then it would be worth it. Yeah. If I can jump in, we do have a project called GRGPU. You can find it on GitHub. That is basically what you're asking for. It basically has a block that moves data to the device memory. Then you write your GNU Radio blocks in GPU code,
37:42
which was CUDA, and now they support OpenCL. And then you have a block that brings it back from device to host memory. It's not perfect, there's still a way to go, but there is support. Okay. Yeah, sure. Do you have plans for LTE, and public-safety LTE, which is the next thing?
38:03
Plans? Not exactly. I mean, we usually don't plan that much. We just, you know... oh, that looks interesting, we're going to look at it. But most of the time, that kind of implies that we have hardware or something to look at, besides the spec,
38:20
because just reading the specification is also not that interesting. At least I like to look at signals and the spec in parallel. So until... I don't actually have LTE in Belgium. So... Yeah. Sure. When you talk about digital radio, software defined or otherwise,
38:44
you mentioned here AMBE and these other codecs, yeah? Yes. Non-standard, closed codecs. We have here in Europe DMR, digital mobile radio; the open standard is approved. It has all the regulatory and technical setup, which
39:02
is a goldmine for software developers. Have you considered looking at it here in Europe? Yeah, but most of the time we... I mean, when I started the GMR project, I thought there was a reference implementation in the spec, because for every other protocol I did before, there was a reference codec.
39:21
And so it's only when I extracted the voice packets and then, okay, I go to the GMR site and I look for the reference codec and just can't find it. What the fuck? So it's only at that point that I realized that I needed to actually reverse engineer the codec. Yeah, but if I had the choice, I'd take open codecs, of course. Since I'm implementing existing protocols,
39:42
I take whatever they chose in the spec, right? Yes? So have you used DSD for that? I've never really played with DMR. I just know there is a gr-dsd block. I tried it a couple of days ago
40:02
and it works. I tried it for D-STAR and it worked, but yeah, I don't have any DMR radio to try, so... Okay, I think we're probably over time already, so... Thank you.
40:21
Oh yeah, that's nice. Yeah, just thank you. Thanks a lot. You're welcome. Bring more chocolate. Chocolate? Don't worry.
40:41
Don't worry when he yells at you. It's okay right now. It's okay right now.
41:02
Thank you.
41:43
Thank you.