
Formal Metadata

Title
libLTE
Subtitle
An Open Source LTE Library
Title of Series
Number of Parts
199
Author
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
libLTE is a free and open source LTE library for SDR mobile terminals and base stations. The library does not rely on any external dependencies or frameworks. The project contains a set of Python tools for the automatic code generation of modules for popular SDR frameworks, including GNURadio, ALOE++, IRIS, and OSSIE. These tools are easy to use and adapt for generating targets for specific platforms or frameworks. libLTE is a continuation of the Open-Source LTE Deployment (OSLD) project. OSLD provides an LTE library together with ALOE++, a real-time SDR framework. libLTE builds upon the success of OSLD and provides complementary tools for researchers and manufacturers that do not wish to use a specific SDR framework. More info and download on ALOE++ and OSLD: https://github.com/flexnets/aloe/wiki
Transcript: English (auto-generated)
May I? Ismael, please. My name is Ismael. Nice to see you here. I come from the CTVR research centre.
I'm going to present libLTE. It's a modular open source library for LTE. There are other people involved in this project: Paul Sutton, from CTVR too; Vuk, who is with Virginia Tech; and Toni, who is from UPC in Barcelona.
libLTE is a very new project; we just started it in January. It's a library for LTE. By a library, we mean that it's not an application: we don't want to build an eNodeB (a base station) or a user terminal.
We just want to provide a collection of modules that then you can use to implement your base station. But, of course, we will provide a reference implementation of a base station and a mobile terminal.
It's a modular library. By modular, what we mean is that if you're only interested in one of the modules of the library, you should be able to pick it out and use it in your application. Those of you who regularly use libraries have probably found that this is sometimes very difficult.
Maybe you're interested in only a piece of the library, and it's very difficult to extract it and use it in your own software. We also want it to be computationally efficient. That means we don't want to just implement the standard in a naive way.
We want to be clever. For instance, if you have to compute a convolution, you can use an FFT-based convolution, these kinds of things. Another objective is to avoid dependencies on other libraries or frameworks, to facilitate portability to embedded systems, for instance.
What is true, and we have seen it today, is that GNU Radio, for instance, has a huge number of users. If you want your application to be used by other people, you have to provide some way for these GNU Radio users to use it.
We also have other SDR frameworks. We have seen IRIS here, but we also have ALOE, which I will talk about a little later, and SCA, which is a military framework for software radio.
We want somehow to enable the users of these frameworks to use this library. In this table, I summarize the related projects on LTE. The first one is the Amarisoft software.
It's probably the best LTE software that we have today, but unfortunately, it's proprietary. It is able to run on a quad-core in real time with two antennas and 20 MHz bandwidth, and they implemented the eNodeB as well as the MME.
It also works end to end, but unfortunately, it's proprietary. We also have OpenLTE. As far as we know, it's also quite complete. It implements the PHY and MAC layers and partially the RRC layer, but unfortunately, it's not modular.
If you are only interested in some of the DSP blocks, they are very difficult to isolate, because there are many dependencies between modules. We also have gr-LTE, but, again as far as we know, it only implements the synchronization and the decoding of the broadcast signals.
The PBCH is the physical broadcast channel in LTE. It also uses the Boost library. We also have the LTE Cell Scanner, which doesn't use a framework,
but it too only implements the synchronization and decoding of the broadcast channels, and it uses IT++. IT++ is a very good library, but when you move to an embedded system, it's very difficult to run it. And finally, there's also OSLD, which is a project I was involved in.
This was also an LTE downlink application, for the ALOE framework, but that framework was very focused on distributed real-time processing, and so it wasn't easy for people to use.
So, why do we want another LTE library? The first thing is that we want, as I said, to enable the reuse of modules. We want to enable portability: C++ or Boost are difficult to find on some embedded platforms.
And we would like to avoid the necessity of using a framework. If you want to use one, you can, but sometimes it's too much overhead, and it's also very hard to move a module from one framework to another.
And finally, efficiency. As I said, it's not only about implementing the standard. So we have these objectives: modularity, the ability to generate code for frameworks, and efficiency.
The design approach we took is a two-layer interface for the modules. At the low-level API, we don't impose any rules on the designer. The designer can implement the DSP block as he wants.
Then we define a high-level API where we do specify some rules. Finally, the idea is not to require well-known libraries like FFTW or VOLK, but if they are available on the system, to use them.
This is an example of how the low-level API is defined. As you see, there is no rule: the programmer defines his own initialization functions and functions to allocate memory.
You can set options, you can get options: a typical interface, without any restrictions. But then you have the high-level interface, where we do specify these three functions, for instance, that you typically see in all the frameworks.
All the frameworks start with an initialize function; then they have a work function, which is called periodically to perform the DSP; and then a stop function.
The idea is that the DSP designer has to implement these functions by calling the low-level API functions. We also define this structure, which needs to be followed, where you can see that the designer specifies the parameters that configure the module at initialization,
and then some runtime input or output parameters that the module accepts or produces. All of this is defined in a .h file.
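As a rough sketch of what such a high-level wrapper might look like (all names and fields here are illustrative, not the actual libLTE API), a module following the initialize/work/stop pattern with a parameter structure could be written as:

```c
#include <stddef.h>

/* Hypothetical high-level descriptor for a toy "scrambler" module:
 * initialization parameters are grouped in one sub-struct, and the
 * runtime input/output buffers are plain fields, as described above. */
typedef struct {
    struct {            /* parameters fixed at initialization */
        int seed;
    } init;
    const float *input; /* runtime input samples */
    int in_len;
    float *output;      /* runtime output samples */
    int out_len;
} scrambler_hl;

/* The three functions every module exposes to a framework. */
int scrambler_initialize(scrambler_hl *h) {
    return (h && h->init.seed >= 0) ? 0 : -1;
}

int scrambler_work(scrambler_hl *h) {
    /* Toy "scrambling": flip the sign of every other sample. */
    for (int i = 0; i < h->in_len; i++)
        h->output[i] = (i % 2) ? -h->input[i] : h->input[i];
    h->out_len = h->in_len;
    return 0;
}

int scrambler_stop(scrambler_hl *h) {
    (void)h;
    return 0;
}
```

A framework wrapper would then call initialize once, work periodically on each block of samples, and stop on teardown, with these three functions delegating to whatever low-level API the designer chose.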
From this, using a Python C-parser library, we parse this header, and with a Python script we convert this module specification into a generic XML file. Then, from this XML file, using another script, you can automatically generate the C or C++ files that your framework needs.
For instance, right now we only have the Python script that generates modules for the ALOE framework.
But it shouldn't be difficult for someone to write a Python script that converts the XML file into a GNU Radio block. So the idea is that, with the same library of modules, we can generate code for very different frameworks.
How do we work? Here I try to describe how we typically work with this library. You start with a MATLAB model, and we use a reference base station, which happens to be the Amarisoft one, because it's much cheaper than a professional signal generator.
Then we implement our receiver code for a specific piece of functionality. Next we move to a live base station, a live signal that we capture from the base station, and we check that our receiver code is working properly.
And finally, we build the transmitter, check it against the receiver that we know works with the standard, and verify that the signal generated by the transmitter is the same as the reference.
This diagram shows a little of the module structure of the library. At the bottom you have small modules, each implementing one DSP block, for instance the scrambling, the rate matching, or the CRC. These modules have minimal inter-module dependencies.
So if you are only interested in, say, the precoding of LTE, you can take that file and move it to your application. On the other hand, as you move up the layers, you obviously start finding more dependencies.
For instance, the synchronization algorithm uses this set of files for decoding the primary synchronization signal and estimating the channel. So if you are interested in the synchronization, then obviously you find some dependencies.
This is the directory structure of the library. We organize the blocks into categories, and there is a single .h file that programs using this library need to include.
As for what we've done so far: we implemented a real-time cell-search program. It scans an LTE band, looks for base stations, and synchronizes with them.
Later, if we are lucky, we will try to find some LTE base stations; I don't know if we will find any. LTE synchronization uses two signals which are sent by the base station:
the primary and secondary synchronization signals (PSS and SSS), which are transmitted every five milliseconds. So the receiver needs to perform a correlation with three possible PSS sequences every five milliseconds.
This is computationally very expensive, so you need to be clever about how you implement it; otherwise, you will consume all your resources just on synchronization. You also have to do carrier frequency offset (CFO) estimation and, in most cases, sampling frequency offset (SFO) estimation.
The CFO you can estimate from the cyclic prefix or from the PSS. What we do is use the PSS, because it's more accurate. To estimate the sampling frequency offset, you observe the variation in the times at which the PSS arrives.
Finally, this is what the cell-search program does. It scans an entire band. First, we look at the received signal strength. For those frequencies where we observe that the signal power is high enough, we try to correlate the PSS.
If we find the PSS, then we track the signal for N frames and estimate the CFO, the SFO, and the cell ID. As for implementing LTE: LTE is a very complex waveform.
If you want to be able to run it in real time on a laptop, it's very hard. To start with, the sampling frequency is a problem, because not all hardware has the 3.84 MHz clock. The USRP N210, for instance, doesn't have it, so you need to resample.
Resampling is very expensive. But as Tom pointed out before, FFTW has very good performance even when the length of the DFT is not a power of two. So maybe one solution is not to resample to the standard clock, but to sample at the rate your hardware supports and change the length of the DFT accordingly.
Another problem is the synchronization. The synchronization signal is carried in a signal sampled at almost 2 MHz, 1.92 MHz, but it only occupies about 1 MHz of bandwidth. So you could sample at around 1 MHz, exactly 945 kHz.
Also, a time-domain correlation is very expensive; you should use a DFT-based correlation. And then, once you have found the peak, narrow your search to only where the signal is supposed to be.
The channel coding is also very expensive, especially the turbo decoder, and it needs very high throughput. So this is something that should probably be done with SIMD instructions.
With VOLK, for instance, exactly. But implementing a turbo decoder with VOLK is probably not easy. Multi-user detection is also a problem at the base station, especially synchronization and detection. This is roughly the roadmap of the project.
We started with the downlink. For most of the physical channels of the downlink, we already have code from the OSLD project that I was involved in before, and this code can be reused.
For the uplink, too, most of the parts are the same as in the downlink and can be reused. But there are still many things to do, like the SIMD decoder, MIMO, and integrating all the pieces.
To summarize, libLTE is a library. We aim at modularity: we want to be able to use this library, or only parts of it, on embedded platforms, with no framework dependencies or third-party libraries.
And I take the opportunity to call for contributions: either MATLAB models or C implementations are welcome. If any of you is working on LTE or something related, any kind of implementation is welcome. And that's it. Thank you very much.
Any questions? How do you correct the sampling frequency offset? Actually, right now we don't correct it, because for the synchronization, the correlation is robust against the sampling frequency offset.
We are able to detect it, and we will show in the demonstration later that we are able to detect it. But correcting it is very complex, especially if the offset is very small. But yes, this is a problem.
Are you planning to do this, or just resynchronize every frame? Yeah, exactly. If the offset is small enough, you can resynchronize every frame. You just lose some SNR.
Any other questions? Can you repeat the question for the audio?
I think he asked whether we estimate the CFO and whether it's related to the offset of your oscillator. Was that the question? Could it be handled by the reference oscillator in the receiver instead?
If the hardware platform supports it, you could use a VCTCXO instead of a normal oscillator, so you could control the frequency a little. Then you could solve both of these problems at the same time: you could just tune the reference clock of your receiver. First of all, you cannot do that with all receivers.
And the second thing is that in LTE, correcting the carrier frequency offset is very easy. Estimating it is very easy, very simple computationally speaking.
But correcting it is also very easy. The sampling frequency offset is a little harder. But as we said, if it's small enough, you just resynchronize every five milliseconds and you solve the problem.
To estimate the CFO: correcting it is very easy, you just multiply by a phase. And to estimate it, you have two ways. You can use the cyclic prefix, in which case you look at the angle of the output of your correlation,
because you are transmitting the same sequence, so you look at the phase there. And if you have already synchronized with the primary signal, this PSS sequence, which is transmitted by the base station,
you know what this sequence looks like; there are only three possible sequences. And once you have correlated and are synchronized in the time domain, you also look at the phase shift at the output of your correlator.
And this gives you the CFO. It is a standard approach; it is used by most synchronization techniques.
You can look at the referenced paper. For the SFO, you look at the variation: when you synchronize, you know the time offset.
So if you look at the time offset every frame, you get an estimate of the SFO. What if it is very small? If it is very small, you need more time. And then you go out of sync? No. If it is small, you don't go out of sync. Exactly. If it is small enough, you just resynchronize every five milliseconds.
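That drift-based SFO estimate can be sketched as a least-squares line fit over the per-frame timing offsets (again illustrative, not the libLTE implementation). With a 5 ms frame at 1.92 Msps, `frame_len_samples` would be 9600:

```c
/* Fit a line to the measured PSS timing offset of each frame; the
 * slope (samples per frame) divided by the frame length in samples
 * gives the relative sampling frequency offset (1e-6 = 1 ppm).
 * Plain least-squares over frame indices 0..n_frames-1. */
float sfo_estimate(const float *timing_offset, int n_frames,
                   float frame_len_samples) {
    float sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n_frames; i++) {
        sx  += i;
        sy  += timing_offset[i];
        sxx += (float)i * i;
        sxy += i * timing_offset[i];
    }
    float slope = (n_frames * sxy - sx * sy)
                / (n_frames * sxx - sx * sx);
    return slope / frame_len_samples;
}
```

Averaging over more frames, as the speaker notes, trades synchronization delay for a less noisy estimate of a very small slope.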
But you don't need to correct it. If you want to, you could; but then you don't have to resynchronize, you just estimate over, let's say, 100 frames what the offset is. And if it is that small, correcting it would be very complex.
And if it is only a tiny fraction of a sample, then you probably don't need it, because the noise will spread it so far that it doesn't matter anymore, and you still have the phase noise on top of those small drifts.
You will not see it; it's not worth dealing with. Is that what you are saying: you estimate the frequency offset for one frame and assume it stays in sync for that frame? No. I mean, you can do it, but your estimate is not very good, because it is subject to variation.
We were talking about sampling frequency offset, not carrier frequency offset. Ah, sorry, those were two different questions. No, but for the carrier frequency offset, that is also true: you have to average over, I don't know, 100 or 50 frames.
Each frame is 5 milliseconds. And the more accurate you want your CFO estimate, the higher the delay in synchronizing with the network. But there is no feedback? It is only open loop? Yeah, there is no feedback. OK.
I will move the other questions to the open session. I would like to thank Ismael again.