
Decoding Meteor-M2: QPSK, Viterbi, Reed Solomon and JPEG


Formal Metadata

Title
Decoding Meteor-M2: QPSK, Viterbi, Reed Solomon and JPEG
Subtitle
from IQ coefficients to images, analysis of digital weather satellite transmissions
Title of Series
Number of Parts
561
Author
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
A low cost, digital video broadcast-terrestrial (DVB-T) receiver is used to collect radiofrequency signals emitted from the low Earth orbiting Russian satellite Meteor-M2. The QPSK encoded signal is analyzed all the way from extracting bit values to recovering the JPEG encoded image transmitted from the satellite. This investigation is an opportunity to experimentally assess all the layers of digital communication widely used from Deep Space communication to daily mobile phone communication, including convolutional encoding with Viterbi decoding, Reed-Solomon error correction, and JPEG image display. Few members of the audience might have any interest in the details of Meteor-M2 weather satellite transmissions. However, tackling the reception of this digital weather satellite opens the opportunity to address most if not all the layers of the OSI model: from the physical layer, by collecting the radiofrequency signal using a cost-effective DVB-T receiver acting as a general purpose software defined radio signal source; to the data link layer, with the various error correction schemes implemented to address the corruption introduced by the noisy radiofrequency communication channel (Viterbi, Reed-Solomon); and the network layer, with the frame encoding including telemetry and, of course, the payload as a digital picture. The latter is encoded in JPEG format, adding more abstractions with the lossy compression to be reverted to display greyscale images representative of the atmosphere and ground reflectivity in the various wavelengths monitored by Meteor-M2. This decoding path matches most recent space-borne signal transmissions, as documented by the Consultative Committee for Space Data Systems (CCSDS [1]), and despite extensive documentation available online, a practical demonstration of the various decoding steps helps in understanding the many documents over which the information is spread.
Transcript: English (auto-generated)
OK, thank you for still being here for our last talk. So this last talk will be about decoding Meteor-M2. What I would like to emphasize is that I assume most people in this room don't care at all about Meteor-M2.
The topic I would like to address here is using Meteor-M2 as the reason for addressing all these fascinating topics. I should emphasize, I'm trying to be a physicist. I've never been taught about any of these things here. So I discovered everything by myself, and I found it very fascinating to be discovering all these signal processing techniques
that you find in the various documentation about digital communication. I want to show you how getting from the raw data, QPSK, all the way to a JPEG image, as you can see here, is what this talk will be addressing. So Martin did not care to give you a talk with two semesters of signal processing in 20 minutes. I do.
Also, I should emphasize: if you just want to get the image, you can leave the room, because the meteor decoder is doing a much better job than what I'm going to show you. It is working very well. I want to go step by step into what is going on here. So really, for me, the topic is understanding all these things, not using readily available software.
So that's the topic of my talk. I'm lucky enough to be going twice every year to Arctic regions for glacier monitoring. And so I'm lucky enough to be following all these polar orbiting satellites, low Earth polar orbiting weather
satellites, including, amongst the various ones, the Russian Meteor. So why are Arctic regions most favorable for this kind of monitoring? When you have sun-synchronous satellites, they will be in a polar orbit, at 98 degrees inclination; 90 degrees would be right over the North Pole.
And for one of these polar orbiting satellites, here I've plotted the trajectory over one day. You see that when you're in Western Europe, here it's Besançon in France, the green circle is the place you would be if you wanted to see the satellite at the horizon. This one is at an elevation of 15 degrees; this one is at an elevation of 60 degrees, below which I don't even bother to take my antenna out.
In France, you will get at most one pass every day of one of these polar orbiting satellites, while when you go to an Arctic region, you've got all these passes. So it's fascinating, because when you're learning to decode a new satellite, here you have like 10 or 12 passes per day, while here you have one at best.
So I'm investigating Meteor-M2 in this context, and also using the little RTL-SDR receivers, because, of course, when you go there, you're not supposed to take a big bag full of hardware that's not related to your research. So here I can just put one of these little receivers at the bottom of my backpack, then just find any two wires to make a dipole once you arrive there,
and you have your setup for receiving Meteor-M2. I'm sure many of you here in the room have already listened to NOAA. The NOAA satellites are a dying breed, because NOAA is no longer renewing their constellation of analog satellites. They started in the 70s. Now they are at NOAA-19. I think it will go up to 21.
And then they'll stop the NOAA transmissions. So now we have to think about the future, and the future is digital communication. Digital communication is what is specified by CCSDS. So, because the last speaker could not say it: it is the Consultative Committee for Space Data Systems. And this is basically a body trying to standardize this communication.
So that's a bit of a layout of what I want to talk about. So when Bastien was showing you: I have my JPEG picture, then I have TCP/IP, then I have TCP, then I have IP, you've got all your OSI layers. And of course, when you want to decode a JPEG image in Firefox, well, you've got all these libraries available. And for me, the exploration here was:
I collect this QPSK data at the output of GNU Radio, running on the RTL-SDR data stream. And then how do you go from this QPSK all the way to a JPEG image? So for me, it's a little bit like trying to listen to a JPEG image being transmitted over HTTP with just an oscilloscope connected to your Ethernet cable.
And for me, it's an adventure to try to get all the layers one after the other. So of course, in this talk, I don't claim to be going into detail. I would like to show you the outline. The slides will be available on the website, and I hope it will make you curious about getting into all this story. Now, you might be wondering: people have been working happily with NOAA,
so why even bother with such complex networking? So this is a talk I saw when I was in Huntsville at the Marshall Space Flight Center, from Dave Israel at the YZ conference, where he was showing: you've got the International Space Station. The International Space Station is flying at 400-kilometer altitude,
and it's visible, if you remember my little circles, on a radius of 1,500 kilometers. So this means you would need to put one station every 1,500 kilometers along the path. So this is what the Americans did for Gemini and Mercury. They had one ship or one station every 1,500 kilometers, but that's no problem because they only did one or two orbits. So you just needed to put a few stations along the orbit.
ISS is orbiting all around the Earth. Of course, you cannot put one station every 1,500 kilometers all over the Earth. So what is happening now is that ISS is completely automated. The astronauts are running experiments, but everything on the ISS is operated from ground stations, and the ISS is only visible within a radius of 1,500 kilometers. So the ISS is not directly talking to the Earth,
but it's talking through the Tracking and Data Relay System, the TDRS satellites, which are geostationary satellites. So ISS, at 400 kilometers, rotating quite quickly, talks to TDRS, the TDRS are talking with each other, and TDRS sends everything back down to Earth. The same is true for Hubble. The Hubble Space Telescope is very expensive. You don't want to run it only as it is flying over your head.
You want to continuously monitor the Hubble Space Telescope measurements. And of course, you don't think that the US Air Force cares much about the science in the Hubble Space Telescope, but once you figure out that Hubble is just a spy satellite upside down, you might figure out why you have TDRS up there in space. So of course, here you have multiple satellites with multiple experiments.
So you need a way of packetizing your data. You need to say: it's satellite number X, which is sending data from instrument number Y. And how do you do this? Well, that's where you go from QPSK, which would be your Ethernet cable, all the way to the JPEG image
through all the layers of the OSI stack. OK. Let's try to have fun with the OSI layers. First of all, we need to predict where Meteor is flying. I still use SatTrack, despite the Y2K bug, for which there is a patch that is still not merged. You can use WXtoImg. In the paper, I explain to you how you can cheat WXtoImg,
which is no longer maintained, into thinking that one of the NOAA satellites is actually Meteor-M2, so that you can still use WXtoImg. And if you have internet access, you can use the Heavens-Above website. Again, the beautiful thing about being in Spitsbergen, in the Arctic at 79 degrees north,
is that you've got all these passes at high elevations, which, of course, you don't get in Western European countries; in a lower-latitude country, you'll get one or two passes at best. OK, so we know when Meteor-M2 is flying. We take our RTL-SDR. We collect the data. This is all stolen from the AirSpy website
for receiving Meteor-M2. A rational resampler, then your clocking: a Costas loop, which locks on the frequency offset between the carrier and the local oscillator, then bit recovery, that is, clock recovery. And at the end, you've got your soft bits.
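The soft-bit convention just described can be sketched in a few lines; a minimal illustration where both the sample values and the sign convention are assumptions, not the actual demodulator output:

```python
import numpy as np

# Soft bits as just described: signed 8-bit values where the sign carries
# the most likely bit and the magnitude the confidence.  The sample values
# and the "negative means 1" convention are made up for illustration.
soft = np.array([-120, 90, -15, 3, -128, 127], dtype=np.int8)

# Hard decision: saturate each soft bit to its most probable value.
hard = (soft < 0).astype(np.uint8)
print(hard)  # -> [1 0 1 0 1 0]
```

A value like 3 or -15 near zero is a low-confidence decision; the Viterbi decoder later exploits exactly that magnitude information.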
So already, I needed to learn the first word: when I was doing this, I didn't know what a soft bit is. A soft bit is an IQ coefficient where your ones and zeros are not yet saturated, but are still represented by an 8-bit value, and you still need to identify whether it is most probably a 1 or a 0. So the first question is: are my data even worth
investigating? So this is the spectrum. Unlike GPS, now we have strong signals, so you see there is something happening here. Is it a QPSK signal? Well, if we extend what Paul Bovin taught us about GPS, where BPSK is collapsed by squaring the signal: of course, if you take the n-th power of an n-PSK signal,
you collapse, again, the spectrum spreading due to the PSK modulation. So here we take our raw signal. Well, there is something, but we don't know if it's the right modulation. We square it: it is not BPSK, since we haven't collapsed the spectrum. We raise it to the fourth power: it is QPSK, because your spectrum spreading
has collapsed into the carrier. So we've got a signal that is worth investigating further. It seems to be QPSK modulation. Once we've done this, we know it's a packetized system. A packetized system means something will be repeated. If we look at the documentation,
you will find that all CCSDS-compliant communications start with a header. Well, if you have packets, you need to know where the packet starts. And the packet start header is 1ACFFC1D. Try to remember this, because I will keep on repeating this sentence all the time. So at first, I don't know what this packet is.
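The fourth-power modulation check from a moment ago can be sketched on a synthetic QPSK signal; every parameter here (sample count, carrier offset) is illustrative:

```python
import numpy as np

# Synthetic QPSK: random symbols on the unit circle times a carrier offset.
rng = np.random.default_rng(0)
n = 4096
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))
carrier = np.exp(2j * np.pi * (100 / n) * np.arange(n))  # offset on a bin
x = symbols * carrier

def peakiness(sig):
    """Ratio of the strongest FFT bin to the mean bin: high for a pure tone."""
    s = np.abs(np.fft.fft(sig))
    return s.max() / s.mean()

# The raw and squared signals stay spread out; raising to the 4th power
# collapses the QPSK modulation into a single carrier line at 4x the offset.
print(round(peakiness(x)), round(peakiness(x ** 2)), round(peakiness(x ** 4)))
```

The first two numbers stay small (noise-like spectrum) while the last one is huge: the signature of 4-PSK, exactly the test described in the talk.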
I just want to know whether there is some repeated header in the signal. So as shown by the previous speaker, you just autocorrelate the signal. If there is some redundancy, this redundancy will show. So by autocorrelating my signal, well, indeed, I see a peak at 16k samples and another at 32k samples. So there is some redundancy.
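The redundancy search can be sketched with a synthetic repeated frame; the frame length of 500 here is arbitrary, whereas for Meteor-M2 the peak shows up at 16k samples:

```python
import numpy as np

# A frame of random bits repeated back to back produces an autocorrelation
# peak at the frame period, even without knowing the frame contents.
rng = np.random.default_rng(1)
frame = rng.integers(0, 2, 500)
x = np.tile(frame, 8) * 2.0 - 1.0          # map {0,1} -> {-1,+1}

X = np.fft.fft(x)
ac = np.real(np.fft.ifft(X * np.conj(X)))  # circular autocorrelation via FFT
lag = np.argmax(ac[1:800]) + 1             # skip the trivial zero-lag peak
print(lag)  # -> 500, the frame period
```

The same FFT-based trick scales to the real IQ recording, where the peak position reveals the transport-frame length.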
Every 16,000 samples, I have some repetition. So it's definitely worth working further on this data. Now, the first thing that got me stuck, because again, I'm a physicist and I haven't been taught about digital communication, was convolutional encoding. That's the topic of Martin's talk this morning.
I'm just going slightly further into it, because I want to show you how it's decoded. I don't want to get into the encoding part. The encoding actually is very simple, because it is shown in various documents as an XOR: as was shown by Martin this morning, you just take your data stream and convolve it.
So you try to mix all this data to create as much randomness as you can. If one of these bits is corrupted, you have a good chance, because you've spread the information over a long duration, of recovering this information. So here it's a seven-bit-long shift register. You have taps from which you XOR, and you get twice as many bits on the output
as you had on the input. So this shift register here will clock up, down, up, down, and alternately take the output of one polynomial and then the other. You can also express this as a matrix, where time evolves over the x-axis, and you jump: first coefficient of the first polynomial,
first coefficient of the second polynomial, second coefficient of the first polynomial, second coefficient of the second polynomial, and so on. So you have your polynomials, which are interleaved, and you just shift in time. That's another way of implementing your convolutional encoding. And the last way of saying it is that you can express it as a state machine.
So you take the various states of your polynomial here. You input a new bit into your system, and by inputting a new bit into your system, your shift register changes. So if you had 0 and you inject a 0, you stay at 0. And for your output, you run the XOR on all these 0s and you get a 0 output. If you inject a 1, well, your 0 shifts along and the last 0 drops.
The 1 comes here. And you run this through the XOR and you get 1, 1. So you can express this as a state machine. Once you've discovered the state machine expression, you can write this as the evolution between the various states. So previously, I had given names, A, B, C, D, to my various states. And then you can draw the state machine.
So A stays in A if it is fed a 0; A goes to B if it is fed a 1. These are the input bits.
A, when it is fed a 0, will output 0, 0; A, when it is fed a 1, will output 1, 1; and so on. So you can draw your state machine. Encoding is very efficient, very easy: it's just XOR. Now, the reason I wanted to show you this is that if you take the same description that we had here,
but now use it to decode, that's a 30-second description of the Viterbi decoding algorithm. In 30 seconds, what you have here is: let's imagine I have received this bit stream. So this is what I have received. So I split what I have received into 0, 0, 0, 0, 0,
and so on. And what you see here: I start with 0, 0. OK, I get 0, 0. That is most probably state A with a 0, 0 output. 0, 0: state A. 0, 0: state A. So these three 0s are just A looping back into A. Now we get 1, 1. 1, 1 is a feasible output of state A
that gets us into state B. So we go into state B. When we're in state B, we get 1, 1. But that's not possible: we cannot get a 1, 1 out of state B. We can get 1, 0 or 0, 1. Well, at the moment, we don't know what's the best option, so let's follow the two possible paths. We know one is wrong, but let's follow the two possible paths. After that, we get 0, 1. So we could be here, in C. But C cannot output 0, 1.
It can be only 1, 1 or 0, 0. So C would create two errors. That's the wrong path, so we cut it out. And Viterbi tells you: let's not follow this one. Now, we go into this path here, because 0, 1 is a valid output of D that would be considered as a 0. And then you go on, and you follow your path. So if I add the output bits, here
you have, in red, the number of errors. Two errors means we give up on this particular branch, and we continue with the branch with only one error. And this unique error continues with a consistent path that tells you: in the transmission, this bit was erroneous. So you see that, by spreading the information over a long duration, we had just a burst of a 1-bit error.
That's the point of convolutional encoding. It's just noise on 1 bit, and this unique bit has been recovered by the Viterbi decoding. And then, indeed, we recover 1A, which is the first byte of our synchronization word. So OK, we've understood Viterbi decoding.
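The path pruning just walked through fits in a few dozen lines. As a small stand-in for the talk's four-state A/B/C/D example (not the actual K=7 code flown on Meteor-M2), this sketch uses the classic 4-state, K=3, rate-1/2 code with generators 7 and 5 (octal):

```python
G1, G2 = 0b111, 0b101        # toy generator polynomials (7 and 5 octal)

def parity(v):
    return bin(v).count("1") & 1

def encode(bits):
    reg, out = 0, []
    for b in bits:
        reg = ((reg << 1) | b) & 0b111
        out += [parity(reg & G1), parity(reg & G2)]
    return out

def viterbi_decode(received):
    INF = 10 ** 9
    metrics = [0, INF, INF, INF]      # start from the all-zero state
    paths = [[], [], [], []]
    for i in range(0, len(received), 2):
        r0, r1 = received[i], received[i + 1]
        new_metrics, new_paths = [INF] * 4, [None] * 4
        for s in range(4):            # one survivor path per state
            if metrics[s] == INF:
                continue
            for b in (0, 1):          # hypothesize both input bits
                reg = ((s << 1) | b) & 0b111
                ns = reg & 0b11       # next state = last two input bits
                cost = (metrics[s] + (parity(reg & G1) != r0)
                                   + (parity(reg & G2) != r1))
                if cost < new_metrics[ns]:   # prune the costlier branch
                    new_metrics[ns] = cost
                    new_paths[ns] = paths[s] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[min(range(4), key=lambda s: metrics[s])]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg)
rx[5] ^= 1                            # flip one bit in the channel
print(viterbi_decode(rx) == msg)      # -> True: the single error is corrected
```

Exactly as in the slide walkthrough, the branch that accumulates two disagreements is abandoned, and the surviving path pinpoints and corrects the flipped bit.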
So now, well, we can go for... so yeah, sorry: if you don't want to do all the math by yourself, you have libfec by KA9Q, and libfec will do the job for you. Here, I put for you a very simplified chart of running libfec for Viterbi decoding.
Just don't make the same mistake as I did: libfec will not take 0 or 1 as input. You need to feed it 0 or 255; it's working on bytes. So I struggled for a couple of weeks: why is libfec not decoding? Just because I was giving it 0 and 1. Give it 0 and 255. And again, the encoded word here will be decoded as 1ACFFC1D.
So this is the Viterbi-encoded word; you can use libfec to encode or to decode. This is the encoded word, and this is the decoded word. This is how you do it with libfec. OK, so we've got libfec. We can check that, indeed, we can decode our word. So if we have the sequence, so this
is an example that was given to me by the author of gr-satellites, Daniel Estévez. I hope I pronounced his name correctly. You've got the FEC decoder here, as was shown by Martin this morning. And if I feed my GNU Radio decoder here with the encoded word,
indeed, I can get the output, which is 1ACFFC1D. Except sometimes I get wrong messages, because you see here that my input stream is repeating. And if I repeat my input stream, the hypothesis of Viterbi is to start with a shift register that is full of 0s. And this is not correct, because here,
after the first decoding, I don't have a shift register full of 0s. Then I have one wrong sequence, and then I go back to 1ACFFC1D, which is the correct sequence. So you can play with this, and it's an opportunity to see how the header word of CCSDS is decoded by the FEC decoder in GNU Radio.
What about the sequence? Well, as we did for GPS, now I should be able to correlate my received signal with the synchronization word after Viterbi encoding. And it miserably fails: you see absolutely no correlation peak. I cannot find in my QPSK signal the set of bits
of my encoding word. Why is that? Well, I associated the usual constellation with my QPSK. QPSK is four phases, so I have 90, 180, 270, and 360 degrees. And I associated a symbol, a pair of bits,
with each one of these states. But why would I do that? Why would I not associate a different pair of bits with each symbol? This is actually what you figure out when you read the source code of the meteor decoder. You figure out that the meteor decoder starts by creating rotated copies, all
the possible rotated copies. Which comes back to saying: let's take the standard assignment of bit pairs in QPSK, and let's imagine, like in BPSK, you can have 0 or pi. In BPSK, it doesn't matter, because you just swap 0, 1 to 1, 0, and you will still correlate;
you only have an anti-correlation instead of a correlation. But for QPSK, you've got all these possible shifted positions. So you can swap the real part, you can swap the imaginary part, or you can swap both parts. And if you look into the meteor decoder, you indeed find that, well, I will not go through all of them with you, but you swap all the possible bit pairs.
So 1, 1 becomes 0, 1. 1, 1 can become 0, 0. 1, 1 can become 1, 0. You make all the possible combinations. And because you don't care about anti-correlation, these eight possible bit swaps actually become four possible combinations, because you have four ways of combining all these bits, if you consider that 1, 1 and 0, 0 are the same.
So having done that, now you can see here all the possible correlations. And the only one that gives you a correlation, I don't know if you can see this from the room, but you've got no correlation peak for all these cases. But here, you've got these correlation peaks every 16,000 bits. So this means this is the right assignment of each symbol
to its bit pair. Now I have found how to convert my QPSK signal into the Viterbi-encoded keyword. And with the Viterbi-encoded synchronization word, I can start decoding my sentences.
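The 90-degree ambiguities can be enumerated as remappings of the bit pairs. The base Gray mapping below is only an illustrative assumption; which remapping is correct for Meteor-M2 is exactly what the correlation search against the sync word has to decide:

```python
# Base QPSK constellation: bit pair -> complex symbol (illustrative Gray map).
base = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}

remaps = {}
for k in range(4):
    rot = 1j ** k                     # rotate the constellation by k * 90 deg
    # For each bit pair, find which bit pair its rotated symbol lands on.
    remaps[k * 90] = {
        bits: min(base, key=lambda b: abs(base[b] - sym * rot))
        for bits, sym in base.items()
    }
    print(k * 90, remaps[k * 90])
```

Each of the four rotations yields a different permutation of the bit pairs; correlating the demapped stream against the Viterbi-encoded sync word under each permutation, as the meteor decoder does, singles out the right one.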
So I will skip Reed-Solomon, because Reed-Solomon is actually a block encoder. Viterbi is there to eliminate random bits that have flipped during the communication, while Reed-Solomon... well, the reason I'm skipping it is that I investigated in much more depth BCH, which is the block encoder in RDS that was investigated heavily
by Bastien. So when I worked on BCH, I put a reference here; you can look at how BCH is working. And Reed-Solomon is just an extension of this. The only reason I'm mentioning this is: you've got your data here. If you don't need block correction, where whole blocks of data get corrupted while someone is emitting, well, you can get rid of Reed-Solomon.
If you want to use it, just be aware: because, again, you want to spread information over time, to recover as much information as possible, you will have interleaved Reed-Solomon, meaning you have four interleaved Reed-Solomon codewords. You have data 1, data 2, data 3, data 4, data 1, data 2, data 3, data 4. You need to de-interleave, run the Reed-Solomon
decoding and recovery, and then re-interleave your data. I will skip this, simply because I don't have time to get into the details. And again, I give you the example of how to run the Reed-Solomon decoder in libfec.
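The 4-way interleaving just described can be sketched as follows (the Reed-Solomon correction itself is omitted; byte i of the frame belongs to codeword i mod 4):

```python
DEPTH = 4  # four interleaved Reed-Solomon codewords, as described above

def deinterleave(data):
    # Split the frame into DEPTH codewords: every DEPTH-th byte.
    return [data[i::DEPTH] for i in range(DEPTH)]

def reinterleave(codewords):
    # Merge the (corrected) codewords back into the original byte order.
    out = []
    for group in zip(*codewords):
        out.extend(group)
    return out

frame = list(range(16))        # stand-in for an interleaved RS block
words = deinterleave(frame)
print(words[0])                # -> [0, 4, 8, 12]
assert reinterleave(words) == frame
```

A burst of consecutive corrupted bytes in the frame is thus spread across four independent codewords, each of which only has to correct a quarter of the burst.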
So if you want to give it a try by yourself, here is my data set. I voluntarily corrupt four bytes in the data set, either in the payload or in the correction code. And if I run this through my Reed-Solomon decoder, indeed, libfec detects four corrupted bytes.
And these four corrupted bytes are these values, which can be recovered. So not only do you discover which bytes are corrupted, but you can find the proper initial values of these bytes. So again, a demonstration you have to run by yourself. I can talk as much as I want: if you don't run it by yourself, you don't learn, so try it by yourself.
Good. So we claim to have found out how to work with the Viterbi decoder, and we claim to have understood how Reed-Solomon is working. So, are the bits that we get out of the Viterbi decoder valid? I could claim now that I have done the job and go away. Well, we want a picture; we don't want a random set of bits. So the first thing we can do when we look at the data sheet,
or the documentation of the Meteor-M2 transmission (I'll give you on the last slide the references where you can find the documentation), you see that there is telemetry data. And these telemetry data are announced by a recognizable sentence, so that you know where your telemetry data are located. So you've got this magic sentence: 224, 168, 163, 146,
blah, blah, blah, 191. This magic sentence tells you: I am sending a telemetry frame. So what do we do? Well, we take all our decoded bits, and we cross-correlate our decoded bits with this sequence. And this is one of these awe-inspiring moments where it works.
You find this sentence in all your bits. And if you decode the following bytes as hours, minutes, seconds, you find that the information was collected at 11 o'clock, 48 minutes, 33 seconds, which is indeed the output of the meteor decoder provided as a reference. So you see here that we have indeed properly decoded
the Viterbi and Reed-Solomon layers. Or actually, in this part, I skipped Reed-Solomon, but we have understood Viterbi, because we can find the telemetry sentence and we can decode proper information. Now, I have found the telemetry, but we are still a bit far from pictures.
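The telemetry search just described can be sketched in a few lines. The stream and the offsets of the hour/minute/second bytes after the marker are illustrative assumptions, not the documented Meteor-M2 telemetry layout:

```python
# First bytes of the "magic sentence" quoted above (224, 168, 163, 146, ...).
marker = bytes([224, 168, 163, 146])

# Fake decoded byte stream for illustration: noise, marker, then h/m/s bytes.
stream = bytes([7, 99]) + marker + bytes([11, 48, 33]) + bytes([0, 5])

pos = stream.find(marker)               # locate the telemetry sentence
h, m, s = stream[pos + 4:pos + 7]       # assumed position of the timestamp
print(f"telemetry at offset {pos}: {h:02d}:{m:02d}:{s:02d}")
# -> telemetry at offset 2: 11:48:33
```

Finding a plausible timestamp after the marker, as in the talk, is a strong sanity check that the whole Viterbi chain upstream is correct.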
But the next part, up to the end, is now easy, so I won't get into the details. It took me a couple of months; this work was started a bit more than a year ago. But once you've got the bits, it's just a matter of basically finding what the bytes are and checking whether they follow the standard. So I will not give the details, but indeed, you see that you've
got this header, which is always the same, which tells you: well, we're doing a good job. There's an ID. Then they tell you you have a counter; well, indeed, these three bytes, you see that they are increasing one by one. So we're on the right path. And then they tell you: here is a header. This is the address of the first payload, because the difficulty is that you've got the data packets,
and then you've got the payload packets, and there is no reason for the data packets to be synchronized with the payload packets. So you might have a payload lying over multiple data packets. So this is the address at which the first payload packet is starting, and so on and so on. So I will not give the details, but this is just a matter of following the protocol. So once you've got the bytes, it's really easy.
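The first-header-pointer logic just described can be sketched as follows. This is a deliberately simplified illustration of the idea (one payload start per data zone at most, simplified field layout), not the exact CCSDS packet format:

```python
def split_payloads(zones, first_ptrs):
    """Reassemble payloads that straddle data-packet boundaries.

    zones: list of data-zone byte lists, one per data packet.
    first_ptrs: per-packet offset of the first payload header in that zone,
    or None when the zone only continues a payload started earlier.
    """
    payloads, current = [], []
    for zone, ptr in zip(zones, first_ptrs):
        if ptr is None:                  # continuation only, no new payload
            current.extend(zone)
            continue
        current.extend(zone[:ptr])       # finish the payload left open
        if current:
            payloads.append(current)
        current = list(zone[ptr:])       # start the new payload
    if current:
        payloads.append(current)
    return payloads

zones = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
print(split_payloads(zones, [2, None, 1]))
# -> [[1, 2], [3, 4, 5, 6, 7, 8, 9], [10, 11, 12]]
```

The middle payload spans all three data packets, which is exactly why each packet has to advertise where its first payload header sits.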
And finally, you are supposed, in the payload, to get JPEG images. And this is where I gave up. I said: OK, I'm not going to re-implement the whole Huffman encoder and everything. So this is where I just took this port of the meteor decoder, the Meteor-M2 decoder that was ported from Pascal to C++.
And I just used their decoder. JPEG is standard bachelor-level computer science signal processing material. I didn't want to write all of it again; maybe for the next training session. But I wanted to get some images, so I went on. So, to conclude the talk, that's what I get at first. You are told in the standards,
again, it's all detailed in the paper that I uploaded on the FOSDEM website, you are told that your frames, your JPEG images, are made of 8 by 8 pixel blocks. These 8 by 8 pixel thumbnails repeat 14 times, and each of these 14-thumbnail sequences repeats 14 times along one picture line.
And then you jump to the next instrument, because there are three wavelengths, which they call three instruments. You get one line of the next instrument, 8 pixels high and about 1,500 pixels long. And then you go to the next instrument, and so on. So here you see that I had some missing frames, so some missing thumbnails that I had to introduce. What I did here is, because you've got a counter,
you know when you have missing frames; here I just very naively did: if you're missing a frame, copy the previous frame. And this way I could fill the missing thumbnails. And here you start seeing some pattern. And here you've got one parameter in JPEG, which is called the quality coefficient, which
scales the quantization matrix. And here no quality coefficient is applied, so you see that the sharp parts here don't have the same tone as the flatter areas of the picture, but you start seeing here the Alps. And finally, by applying the quality coefficient, you get an image that is a bit smoother,
and that compares, I think, quite favorably to the reference picture that was decoded using the meteor decoder that you can find on the internet. So here you see Istria. You have Lake Balaton somewhere over here, I think. You've got Venice over here. So if you take the meteor decoder,
you get an image that is quite consistent with what we got by step-by-step decoding. So that was, of course, in 25, 27 minutes, a very fast highlight of the main steps. Go through the paper. The paper is actually about 50 pages long at the moment and increasing. But I'm trying to put every detail about the discussion
from IQ coefficients all the way to the JPEG image. CCSDS is a protocol for space communication. You might not care about weather satellites. But what Daniel Estévez is showing on his blog, and as mentioned by Paul, is that this is the standard for most satellite communication, and that's going to be the future, because NOAA is going to stop the analog satellites.
So if you're interested in satellite decoding, this is really worth investigating. Daniel Estévez is addressing all the amateur satellites up there. Lucas Teske is a Brazilian guy who's working on the GOES satellites, which are geostationary, and his website was very inspiring. Despite our decoding paths splitting at some point,
his first steps were very insightful, and he helped me a lot by email. These two guys helped me a lot. This is the website that was hosting all the files about Meteor-M2. Somehow, in the middle of this investigation, it disappeared; I don't know where the site has gone. Hopefully, we've got a web archive. It's a fundamental repository of all the data,
some of which cannot be found anywhere else. And finally, there is one article which is not very technical, but it tells you that it could be done. And with that, I conclude my talk, and I thank you for your attention.
Next one. So just as a quick conclusion: Martin mentioned during his introductory talk that we are organizing the European GNU Radio Days. My frustration is that here we have two days of FOSDEM, we have one day full of talks, and we never have time to discuss with each other; everyone's running to other sessions. And I wanted to have an opportunity
to meet with people and sit together. So the way I organize this, or we organize this, is one day of oral presentations and one day of tutorials. Everything is open at the moment. We are proposing some tutorials; please feel free to propose new tutorials. Robin Getz is coming from Analog Devices
to demonstrate the Pluto; so he told me, I hope, and I trust him. The event is located in France, in Besançon. Besançon is a tiny, remote city, which means that hotels are readily available. It's in the east of France, a two-and-a-half-hour train trip from Paris, and I think a couple of hours from Karlsruhe.
The deadline for the call for contributions is March 21st. Registration is free, but please register, because I need to organize; I need to know how many people are coming. The registration deadline is May 1st. The website is over here. And hopefully the evening dinner will be a barbecue, so that everyone can talk to each other and have more time to discuss.
So I will not waste our time with more advertisements, but please come.