Putting Cross Development Support into OBS
Formal Metadata

Title: Putting Cross Development Support into OBS
Series: FOSDEM 2009, talk 14 of 70
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifier: 10.5446/39535 (DOI)
Transcript: English (auto-generated)
00:01
Hello everybody. I will now give a talk about putting support for these nice little gadgets into the build service. My name is Martin Mohring and I am one of the external contributors to the openSUSE Build Service.
00:27
First, the barriers to joining such a project. I come from a small company that has worked with embedded systems for years.
00:45
I have known the Nürnberg R&D people from SUSE for a long time. And it was, I think, two years ago that they decided they wanted to design a new build service system.
01:02
And that was my moment to say: OK, we also had build services in place for embedded applications, so let's join forces, redesign that together, and put it in.
01:21
So the barrier was that I was dealing with lots of experienced people who had released many distributions over the years. It was not a very community-style approach in that sense; there were know-how entry barriers to overcome, so to say.
01:52
The second thing was that it started as a company open source project, which is something different from a community-driven open source project.
02:07
And yeah, the third thing was that we both wanted to re-engineer existing things, improve them, and put together a new generation of that kind of system, one that had to solve new requirements.
02:26
Ok, and the answer, how did I join then, was a pragmatic approach. Nevertheless, as in very many other projects in open source space, fill a gap and take some work that is not done and start with that.
02:50
And gain credibility so that there is trust in what you can achieve, so that you know what the others do and what they can achieve.
03:09
Get a feeling for each other so that you become a team. The result is that I have now been the maintainer of the OBS development package for over a year.
03:25
And we successfully put all kinds of cross-build support into the project. So it was a successful merger. OK, so much for the social aspect of team building.
03:48
Ok, let's start. What kind of cross-development systems can we have or do exist?
04:04
I also have examples here, so that you know what I mean not only by theory and category, but also by example. We started experimenting with what I call the type 1 cross-build environment.
04:29
That means you put together something like BusyBox or Buildroot, all in one, and build your complete system in one bunch. That was also the first experiment we did.
04:49
Ok, that is only the first approach. We tried it, but I will explain why that is not a good solution later. The next variant, and that was already successful and used in the field.
05:06
That was to implement a toolchain and modified packages in the system, and to put the build environment for this into the capabilities of the build service.
05:27
That was the first practical approach where we had a real result, a real distro build, in the end. That's what I call type 2. I have two examples here, STLinux and OpenEmbedded.
05:43
They modify packages and write their own build descriptions for every package. So this was the first approach to get something working fast, but it usually has the disadvantage of requiring much work.
06:02
I mean, that is acceptable for 400 packages, which is the usual embedded distribution. But if you want to achieve something like openSUSE Factory with 4,000 packages, you need many resources.
06:24
You have to keep in mind that you need to fiddle around at least one day per package. Multiplied by 4,000, that is more than ten person-years, so you can easily calculate what that means. So that was no approach for getting openSUSE and existing systems in.
06:45
Then we also said we want some kind of type 3. That is: use the original sources somehow, don't use an emulator, and keep the modifications to the packages as small as possible.
07:12
And what also turned out to be the fastest method to cope with existing distributions was what I call type 4.
07:26
Type 4, that is using emulation. For cross-build, packages typically contain bad things like building an executable and trying whether it runs.
07:43
OK, if you have an x86 processor and want to do that for ARM, that fails. Or if you want to run at least some test suites, that's not possible.
08:00
And there are already lots of Linux distributions around that are more or less natively compiled, even on ARM and these kinds of embedded processors. And we wanted to cope with them. So we also implemented what I call type 4: cross-build with an emulator.
08:38
Oops, this was the wrong one.
08:45
OK, let me summarize a little what requirements we had to solve before we started implementing. You know, the build service has a feature where, just by adding a repository, you can build your application for a new distribution.
09:09
So it's what I call a quite orthogonal feature. You have one dimension that is the processor architecture, and you have the other
09:22
dimension, that is, whether you build for this distribution or for another one, with different releases. And our goal was to keep that approach, so that the user does not have to care about the internals of cross-building.
09:47
So it should be what I call orthogonal. And the next thing is that we had to cope with existing distributions. So it was not acceptable to rebuild everything from source.
10:04
We wanted, as with the current build service, to keep the paradigm that you can use and reuse existing binary distributions. That means Fedora, Ubuntu, whatever. We had that for PowerPC and x86, but we wanted it also for architectures that our workers do not run natively.
10:31
Yeah. And then something more internal: when we implemented this, the execution path of the existing build service should not be disrupted.
10:48
That might not be of much interest to an outsider, but for those
11:02
12,000 users of the build service, it was interesting, because, yeah, we did not want to disrupt the current service. I always come to the wrong one, yeah.
11:24
Another thing is, as Adrian already told you, we have a means of distributing the load, spreading the work over our workers with the scheduler. So if you have a big setup; currently I think we have 250 nodes, right? Something like that.
11:48
The work should be distributed so that big loads of package builds can be handled. We wanted to keep that, and users should also still be able to use the local build feature, so that had to be enhanced as well.
12:06
And yeah. Then I, as a developer, not having at hand a big disk array to
12:21
store 35 Linux distributions on my hard disk, had the problem of how to test all that. And the normal embedded developer also does not buy a server
12:41
with 20 terabytes of hard disk space and 60 nodes to do embedded development. So we had to work a little on scalability. You have to keep in mind that a big distribution like openSUSE, Fedora, et cetera needs 20 gigabytes of hard disk space per architecture.
13:11
And yeah, on the other hand we wanted compatibility, so we were forced to implement also a way to download the distributions on demand.
13:27
We have already a feature in the system to couple build services between each other, but that was not usable in this case because, I mean, those 30 distributions were not in the main system.
13:46
So we decided also to implement some form of demand download of Linux distributions that are stored in FTP trees. And to pick also the metadata of the distribution from the original FTP trees.
14:09
That feature we called download on demand. It means that Debian or RPM packages are downloaded on demand and the metadata is parsed when you create a new project from these FTP trees.
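To make the mechanism concrete, here is a minimal sketch of the idea; this is not the build service's real code. The mirror URL and the package entries below are invented, but the stanza layout is the actual Debian `Packages` index format that such a feature has to parse.

```python
# Sketch of the download-on-demand idea: instead of mirroring a whole
# distribution (up to ~20 GB per architecture), parse its package index
# once and fetch individual packages from the original FTP tree only
# when a build actually needs them.

BASE_URL = "ftp://ftp.debian.org/debian"  # hypothetical mirror

# Two stanzas in the real Debian "Packages" index layout (data invented).
SAMPLE_INDEX = """\
Package: zlib1g-dev
Version: 1.2.3-13
Architecture: arm
Filename: pool/main/z/zlib/zlib1g-dev_1.2.3-13_arm.deb

Package: libgtk2.0-dev
Version: 2.12.0-2
Architecture: arm
Filename: pool/main/g/gtk+2.0/libgtk2.0-dev_2.12.0-2_arm.deb
"""

def parse_packages_index(text):
    """Split a Debian Packages file into per-package field dictionaries."""
    packages = {}
    for stanza in text.strip().split("\n\n"):
        fields = dict(line.split(": ", 1) for line in stanza.splitlines())
        packages[fields["Package"]] = fields
    return packages

def resolve_download_url(index, name):
    """Return the URL a worker would fetch on demand for one package."""
    return "%s/%s" % (BASE_URL, index[name]["Filename"])

index = parse_packages_index(SAMPLE_INDEX)
print(resolve_download_url(index, "zlib1g-dev"))
```

Only the parsed metadata is kept up front; the `.deb` itself is fetched (and cached) the first time a build pulls it in as a dependency.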
14:28
Currently we support the big three systems in this area. OK. Virtualization.
14:43
We said, OK, this is a new form of virtualization. Currently our workers use Xen for virtualization, and for the other processors I put QEMU on top of that.
15:04
So it's a mixed form of virtualization that is used for maximum compatibility. We've also experimented with system emulation, but that was considered too slow.
15:20
So you have to wait endlessly when you set up a system under full system emulation. So we had to make some kind of trade-off between compatibility and performance. It ended up that I used user-mode emulation in the cross-build system.
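User-mode emulation is typically wired into a build root through the kernel's binfmt_misc facility: the kernel is told to hand every ARM ELF binary to qemu-arm, so configure checks inside the chroot can execute their test binaries on an x86 worker. The sketch below only constructs the registration entry; the interpreter path is an assumption, and activating it means writing the line to /proc/sys/fs/binfmt_misc/register as root. This is the standard QEMU technique, not necessarily the exact setup the build service uses.

```python
# Build a binfmt_misc registration entry that routes ARM ELF binaries
# to the qemu-arm user-mode emulator. The entry format is
#   :name:type:offset:magic:mask:interpreter:flags
# The magic matches the 32-bit little-endian ARM ELF header
# (EM_ARM = 0x28); the mask's \x00 ignores the EI_OSABI byte and the
# \xfe accepts both ET_EXEC and ET_DYN, as in QEMU's stock scripts.

# Escaped byte strings exactly as binfmt_misc expects them in /proc.
ARM_MAGIC = (r"\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00"
             r"\x00\x00\x00\x00\x02\x00\x28\x00")
ARM_MASK = (r"\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff"
            r"\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff")

def binfmt_register_line(name, magic, mask, interpreter):
    """Format one /proc/sys/fs/binfmt_misc/register entry."""
    return ":%s:M::%s:%s:%s:" % (name, magic, mask, interpreter)

line = binfmt_register_line("arm", ARM_MAGIC, ARM_MASK, "/usr/bin/qemu-arm")
print(line)
```

With such an entry registered inside the worker's chroot, running an ARM binary "just works", which is exactly what the configure-script problem described earlier needs.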
15:52
OK. Download on demand. Maybe all of you who have used the build service locally have already faced this problem, yeah?
16:06
You want some distribution to build against and need to find out, OK, how do I bring all these DVDs into my system? Adrian has to cope with that every day because people want more and more build targets to build against.
16:26
But on the other hand, I wanted to make progress with development, not with getting bigger internet pipes to download all the distributions I could take care of. So we said we had to implement some on-demand system that does that without the developer having to take care of it.
16:58
And, yeah, so we said implement it from the original FTP tree.
17:10
It means download on demand caches only the needed packages. So depending on your workload, you can end up with, yeah, up to those 20 GB
17:25
being downloaded, but you need lots of packages to build against to reach that. On average, if you have some hundred packages, you only need, yeah, up to 500 MB per distribution.
17:41
That is the usual subset you have if you build an X application, or GTK, KDE, whatever. It's not so much; many packages in the distribution are never used when you build the system, because they are leaf packages: only a user who wants to run them needs them, not the build.
18:05
That's a good thing; otherwise what we had designed would have been useless. And what we implemented are the three metadata systems that currently exist for distributions.
18:21
That is the Debian metadata that is used for all Debian and Ubuntu distributions. We implemented the RPM metadata that is used with Fedora and usually with normal RPM distributions, and that was used for SUSE until the recent changeover.
18:50
And to work around a temporary problem, we also implemented the old SUSE susetags system in addition.
19:02
So with download on demand we can now handle all RPM- or Debian-based package distributions, also for cross-build. And yeah, as I said, it should be fire and forget, and not: oh God, what version does this package have?
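In project-metadata terms, the result looks roughly like the sketch below. The element and attribute names are illustrative assumptions modeled on the build service's project meta format, not the exact schema of any particular release:

```xml
<!-- Hypothetical project meta: build against Debian Etch for ARM,
     fetching packages on demand from the original mirror instead of
     importing the whole distribution. -->
<project name="Debian:Etch">
  <title>Debian Etch (download on demand)</title>
  <description>Metadata parsed from the FTP tree at project creation;
    .deb packages fetched only when the solver needs them.</description>
  <download arch="armv5el" url="ftp://ftp.debian.org/debian"
            repotype="deb"/>
  <repository name="standard">
    <arch>armv5el</arch>
  </repository>
</project>
```

The point of the shape is the fire-and-forget property described above: the project only records where the original tree lives and what metadata format it speaks.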
19:24
I missed it, where do I get it? Yeah, I will also tell you something about the implementation, what we had to change in the system. First, there's a little overview slide you might have already seen in Adrian's talk.
19:50
I will not explain the implementation on that one; I will explain it via the places in the code base where we had to change things.
20:11
What you see here are the active components in the build service source code,
20:23
or active components that run if you set up a system on the server side. Let me first explain, where is my implementation? Okay, yeah, okay. Let me explain it on that one.
20:58
The build service backend is composed of several servers that take care of your package base,
21:13
the scheduling and dispatching of jobs, the jobs themselves, and generating the build results in the end,
21:25
so that you can use that with your package manager or so that you can cascade it. The source server was one of the components we didn't even touch. It's responsible
21:50
for doing the work when you check a package into the build service. It handles the source revisions; it does the work that was explained in the talk before, when you do branching and everything.
22:12
Usually there was, I think, nothing to change. I had to check, but at least it's not worth noting.
22:22
So in this area there were no changes to make for implementing cross-build. The repository server: yeah, it makes sure that when you start a build your packages get delivered,
22:48
that dependencies are calculated, and that when you build a package it knows where to get its prerequisite packages from. So in that area we had to change things, especially for implementing the download-on-demand service.
23:09
The dispatcher was changed to handle the new architectures, and we have new schedulers for the new architectures. And the workers now have to take care that emulation is started when an ARM package needs to be built or run on a worker.
23:59
I think I won't go into more detail here, because it requires too much internal know-how.
24:12
You can ask me if you have some questions here or start with Michael Schroeder's talk on this area first as a starter.
24:23
If you want to know more details about how that was implemented inside: I just want to mention that it's mostly the backend we had to change. In the web client and so on, there are only small things, like making the new architectures known.
24:44
Okay, testing results. We wanted maximum compatibility. So I put together a large testing base, at the moment mostly for ARM, because
25:03
the QEMU emulator is not yet in shape to run this completely for all architectures. And help is welcome to change that. And for PowerPC, which is used in embedded space, we have a solution that is faster.
25:28
We needed a starting point, and since ARM is widely used and the emulator for it runs well, we started with ARM in this area.
25:43
And testing results: we have Debian, Ubuntu, Fedora; even Maemo was put in. And as an example of this type 2 build, we have also implemented the old STLinux distribution: working, running, building, everything.
26:07
Debian means Etch as well as Lenny. Ubuntu means all the ports that exist for ARM. And for PowerPC, Fedora is the same.
26:25
Maemo, I think, has only two versions, one for x86 and one for ARM. And my colleague here managed to build packages for the Nokia N810 and to get C-Sync running on his calculator.
26:49
And yeah, that is also implemented. It was only a test case. It needed two days to implement. We just put the packages in and it worked.
27:03
And for ARM processors, we implemented all the processor levels that you need to run the different types of ARM cores. There was a change in the ABI of the Linux operating system in between. We had to take care also of that.
27:30
It is the so-called OABI, the old ABI used in Linux systems in past times, for example in Debian Etch.
27:42
But today we have a new ABI that is capable of multi-threading and multi-processing on ARM. And to handle the newer ARM cores as well, we implemented in the emulator a way to distinguish between them automatically,
28:06
so that you can mix and match all the packages automatically. The newest is, I think, ARM processor level 7. Those cores have a vector unit, which means floating point, and multi-processing capability is implemented.
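Those instruction-set classes map naturally onto separate scheduler architectures, one per level. A sketch of how a repository could enumerate them; the architecture names here are assumptions modeled on the build service's naming style:

```xml
<repository name="ports">
  <arch>armv4l</arch>  <!-- old ABI (OABI), no floating point -->
  <arch>armv5el</arch> <!-- EABI, up to the ARMv5 instruction set -->
  <arch>armv7el</arch> <!-- EABI, ARMv7 with the vector unit -->
</repository>
```

Each architecture gets its own scheduler, so packages built for one class never get mixed into a repository for an incompatible one.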
28:31
And yeah, to check if that really works, we just installed a Linux distribution on an ARM board and tried out if that works, what we compiled.
28:47
And the next step is what Zonker already told you, if you were in the main track here at 14:00: we have now started building openSUSE for ARM with the build service.
29:02
That's the natural choice. You know that openSUSE is already built with the build service on PowerPC and x86, and we want the same for ARM.
29:20
So we started bootstrapping openSUSE with it now, as a little test to see if it works. And yeah, that test succeeded. So we have the base set of a bootstrapped openSUSE Factory running.
29:44
Roadmap. Yeah, the roadmap. We want to put that into the public openSUSE build service as fast as possible, that's for sure, so that every one of you can use it and build ARM packages.
30:04
I'm watching for Adrian's reaction to this. Okay, let's discuss that in the questions. Yeah, download on demand is a little error-prone at the moment, so you need to cross-check all the time to make no errors.
30:28
So we want to improve user-friendliness of this. You can also use that not only for cross-build, that's for sure. So if you have a smaller build service and don't want big copies of all the Linux distributions, you can also use this for x86.
30:47
Yeah, then optimizations. Emulation is sometimes quite slow. And since we also implemented cross-compilation, our next step will be to optimize compile times by combining what we can do here.
31:12
Yeah, set up an ARM version of openSUSE. I already said in the status section that we are at this point.
31:23
I mean, if all works well, it is achievable that 11.2 could run on ARM, but that needs confirmation.
31:46
We are discussing at the moment whether that is achievable and whether it will be achieved. Yeah?
32:11
Okay, there is one type you didn't mention: that's actually building the ARM openSUSE on ARM processors itself. Yeah, that works.
32:22
But you didn't mention that as a type. Okay, yeah, I didn't mention it. It works. The usual mobile phone is not powerful enough to build OpenOffice with it. So where you have 60 PC nodes, you would need maybe a couple of hundred boards with ARM processors.
32:41
Yes, yeah, we were discussing that. It's more a question of the memory needed to compile than of processor power. I mean, you can do it with lots of them. What's the minimum amount of memory you think you need? That depends on the packages. I think Factory needs one gigabyte of memory to compile all the packages.
33:06
But yeah, we could drop packages. I think once we have the build with an emulator or with cross-build running, building natively will be a rather trivial task.
33:24
If you have a capable machine to build it natively. So it's actually harder to make it work with cross-compiling and with emulation, I think. Yeah, it works already. Yeah, but it's actually the more interesting and the more daring task to implement it. And probably potentially more useful to lots of people because there's probably lots of unused PC processing power lying around.
33:48
Not so much, in my home there's more unused PC power than unused ARM power or unused PowerPC for example, that's for me. So it's probably a good idea to do it that way in the beginning.
34:01
Of course, if you have lots of ARM boards, you will be able to use them. Yeah, okay. Roadmap. What happens when we put this into the public service is that lots of these tiny little things pop up: things that work 100 times,
34:30
but when you do them 20,000 times, fail once. So it's these tiny little hidden bugs that only pop up when you do things broadly, in the big service.
34:43
So I expect some work here, as always. We had the same thing with the virtualization thing and it needed a while until it was really stable. And I expect the same thing here.
35:01
Work to fix the emulation in cases we could not even think of yet. Yeah. The next thing is non-ARM architectures. I said that for PowerPC there was not such big pressure to implement cross-build, but there might be for other embedded targets.
35:28
We need that. And I've discussed that with the QEMU people already. We need some help in this area to improve the situation.
35:43
But that is mostly a QEMU issue at the moment. And imaging and those kinds of things should be made more suitable for embedded use. I mean, at the moment image generation produces DVDs, bootable USB sticks and those kinds of things.
36:07
And what embedded developers think of are image types used in embedded areas. That means you directly generate some form of root file system for flash storage or whatever.
36:22
I mean, that is not a big issue. It should work already. It's more fiddling around with Kiwi to get that solved. And in the end, what do we want to achieve?
36:42
Assimilate with the build service all these tiny little computers you have in your pockets, with ARM cores and others. You have to consider that there are, I think, 1.2 billion ARM cores sold per year.
37:02
At the moment that's mostly mobile phones. And most of them are not running Linux at the moment, but this is changing drastically. With ARMv5 and later there is no problem at all running Linux.
37:20
So there will be a huge demand for this new type of service also in a build service. I mean, for example, the Google phone is one of these examples. They have Linux inside. The Nokia is one of those devices.
37:44
Others, many others I expect. There is a big curve of improvement in horsepower in this area. I mean, at the moment, we have systems in embedded area that are as big as my thumb.
38:07
And they can decode HD video streams, have a 3D engine on board, and reach 1 GHz. So it's a PC, in principle.
38:20
What do we do with that? That is what I mean by lots of embedded devices to assimilate. And here the build service comes to mind, right? Okay, questions?
38:52
Yeah, maybe.
39:07
Yeah, I have, yeah. I do that, yeah. Andrew? Yeah, I take it then if you can compile on the ARM processor, would that mean that you...
39:21
I don't, I... Can you please close the door? I can't understand you. Is it possible then, once you compile on the ARM processor, that it can then be moved to any ARM processor?
39:40
Or is it a specific processor class that you can then install on? Or is it across the board, regardless of ARMv5, ARMv7? Yeah, there is already a kind of ABI in place for ARM too.
40:02
So when you compile something, it works on all ARM cores of that class. So it's the same situation as with x86.
40:26
If you build a Linux kernel for an ARM system, you normally build it for a specific chip, because a lot of peripherals are integrated. So do you have any plans for supporting multiple chips; how are you going to support the kernel?
40:42
Are you only going to support the file system, or are you also going to support the kernel? No, no, we will also provide kernels, let's say some generic types of kernels, for the devices. I mean, that is no problem. We do that for other architectures too.
41:05
Maybe not in such big numbers, because these devices differ more than... They differ a lot more. I mean, I work at Atmel; I think for us you would need to build maybe 10 different kernels. Yeah, but that's no problem. I mean, we have 50,000 packages inside the build service, so if we need 100 kernels for it,
41:30
it's more a question of how to handle that in the process, and of who contributes and maintains them. But, I mean, we could solve it by having the silicon maker provide a kernel for its silicon, for example for a class of silicon.
41:51
I mean, we should try to combine things and not make more versions than needed, but in principle I would opt for kernels being provided by those who know their chip.
42:07
Okay, I think what you need to do then is to specify some configurations the kernel needs to be built with, okay? So there need to be recommendations. Yeah, that will be a new frontier for our kernel team: to integrate
42:28
that from a single-source kernel package with, let's say, 100 variants built. Okay, thanks.
42:42
I just want to make one comment. I mean, ARM is an embedded platform, and I personally would expect that plenty of companies may want to re-roll the ARM distribution for their needs. So it's important that we have ARM in the open source build service, to show that it builds for ARM and works in general,
43:01
but with the build service you can re-roll, recompile, the entire distribution easily, with different compiler flags for example, to support your particular ARM chip. And that's similar to a question I had when talking about... If the door is open I can't understand you.
43:24
For example, on embedded power PC there are at least a few different processors that have slightly different options for when you build a compiler or machine options and they are not all compatible. They are usually compatible into one direction but not the other way around.
43:41
So can I handle this somehow, without forking a complete new distribution, by saying: okay, I want to have a PowerPC 405 GCC and a PowerPC 823 GCC, or something like that, and specify to use that; is something like this planned?
44:01
Because basically, for the rest of the user space I only need a different kernel and a different compiler, and then recompile everything with this compiler. Is something like this easily possible? Yeah, I would solve it with the same methods we use for x86 at the moment. So we optimize for a certain class of processor target with specific compiler flags, but maybe we don't use the full instruction set.
44:33
I have already implemented at least the basic instruction levels as different schedulers, so that you can... it's like x86 64
44:47
bit or 32 bit: you can say, okay, I want to run it on a level-x-capable ARM core. We are at seven, I think, at the moment, and for Linux everything from four to seven counts, so we have implemented three classes for it.
45:12
That's basically the old ABI and no floating point and then everything up to this level
45:21
five instruction set and everything up to the newest that is seven with vector unit and everything. So I would usually build for one of those targets with pre-optimized, I mean it's also a question of what you want to achieve with compiling with a special compiler flag.
45:42
We have benchmarked that, and usually the generic flags help more in improving speed than compiling for a specific processor. I think you asked not so much about recompiling the entire solution as about replacing a package, for example.
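The per-target compiler-flag tuning discussed here is normally expressed in the project configuration rather than by forking the distribution. A sketch, assuming the `Optflags` prjconf keyword; the flag values and the armv5el/armv7el names are illustrative, not benchmarked recommendations:

```
# prjconf sketch: per-architecture compiler flags, so one project can
# target several processor classes without forking the package sources.
Optflags: armv5el -O2 -march=armv5te -mtune=arm926ej-s
Optflags: armv7el -O2 -march=armv7-a -mfpu=neon
```

This keeps the "one source, many targets" property: the packages stay untouched and only the project-level build configuration selects the code generation.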
46:03
And that's something we can do at the moment, but not nicely; there will be a nicer way in the future to simply say: recompile openSUSE Factory, but use this compiler. There was another question over there. I could give an example for that.
46:24
glibc: we could provide five versions of glibc, which means you don't need to recompile 3,500 packages five times, but it already optimizes nicely. I think we have one last question. As Jurgen said, time is running out.
46:46
I have a question. I saw the list of all the supported platforms. Will the Openmoko FreeRunner also be supported? I didn't understand you, sorry. Will the Openmoko Neo FreeRunner also be supported on the ARM platform?
47:04
Openmoko, how could that be? We have the SDK for Openmoko, and we have it implemented and running; Openmoko or Maemo?
47:21
Openmoko. Openmoko, okay. Is that Debian-based? Yeah, there are a lot of distributions on it. It's also Debian-based, with FSO on it. Yeah, if it's Debian-based it should work. Debian packaging we can handle.
47:54
Okay, then we close the talk here. Thank you, Martin.