
Getting Geeko some cross-compilers


Formal Metadata

Title
Getting Geeko some cross-compilers
Subtitle
An Update on Building GCC Cross-Compiler Packages for openSUSE Tumbleweed
Author
Andreas Färber
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
openSUSE relies on native compilation today, resorting to QEMU linux-user emulation for non-native build targets. Here's a brief update on where we are with building real cross-compilers, including for non-Linux targets such as microcontrollers, from our SUSE-maintained GCC packages.
Transcript: English (auto-generated)
Let's start. My name is Andreas Färber, still, and next I'm going to talk about cross-compilers for openSUSE Tumbleweed and the current status of that, since in particular there have been recent developments here. This is intended as a very quick lightning talk only. The GCC packages that we use for the openSUSE distribution, in particular for native builds in that context, are being developed in the devel:gcc project, mainly by the toolchain team at SUSE. And then there are also some cross-compilers in specific CrossToolchain repositories. There are several of them: AVR, recently someone started working on Xtensa, there's MSP430, and some other one that I can't think of right now.
At some point, during Hack Week 10, a colleague of mine, Richard Biener, looked into building cross-compilers for the openSUSE distribution. So not just cross-compilers that are available in the openSUSE distribution, but ones targeting the architectures that openSUSE itself uses: that means x86, PowerPC, ARM, maybe also s390. Those packages were using the glibc that was already being built on the native workers, via aggregation. What I have now been looking into is building pure cross-compilers that do not require having an openSUSE port already building natively, such as in the previous presentation for MIPS. The one that originally inspired me to look into this was one of the ARM boards that I was enabling, both for openSUSE and upstream: the Parallella board not only has a dual-core ARM processor on it, it has also got programmable logic, which is used to connect an Epiphany coprocessor from Adapteva to the ARM cores. This was being used as a kind of very low-memory, mesh-computing type of mathematical coprocessor for speeding up certain calculations.
Of course, for many things you can download cross-compilers from external websites, be it in source code form, with scripts for how to build them, or as binaries; for instance, you can get ARM cross-compiler toolchains either from Linaro or from an ARM Launchpad site. Sometimes you will also be referred to the CodeSourcery toolchains that are available, or in this case there were some being provided by Adapteva themselves. But I was interested in having not just a binary download that I put onto my hard disk, where it then bit-rots and doesn't get any updates, but a cross-compiler toolchain that actually benefits from our package updating processes, in being something that I can just zypper up to get a newer version with the latest fixes. So how does... if we're looking at this particular case... oh, I'm very sorry, you should have seen the slide, and for some reason this is never working the first time. This is what I was talking about here; now you see it, very sorry for that.
So, in this particular case, there is no Linux running on this chip. It's just code that the users would write themselves, or download from somewhere, and then deploy to this chip for doing particular calculations, often with an API for those operations. As such they are not using the glibc that we are using, but newlib. So far we did not have newlib support in our GCC compiler packages, and I set out to change that. What does this look like, package-wise? We would have a cross-epiphany-binutils package, which contains the assembler and various tools. That's kind of the easiest part: pretty much anyone can just branch the binutils package, define a new target name, maybe add a bit of %if logic inside the spec file, and then, usually, if the target is supported in the upstream binutils sources, it will just build and be available. But who wants to write assembler code all day these days?
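To make that concrete, a branched cross-binutils spec file boils down to something like the following sketch; the cross_arch define, the exact tags, and the configure flags are illustrative, not copied from the actual openSUSE packaging:

    # Sketch of a branched cross-binutils package (abbreviated).
    %define cross_arch epiphany
    Name:           cross-%{cross_arch}-binutils
    Summary:        GNU binutils targeting %{cross_arch}-elf

    %if "%{cross_arch}" == "epiphany"
    # target-specific conditionals would go here
    %endif

    %build
    ./configure --target=%{cross_arch}-elf \
                --prefix=%{_prefix} \
                --disable-nls
    make %{?_smp_mflags}

    %install
    make DESTDIR=%{buildroot} install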
Now, usually you would just use the binutils package, build your C library from there, and then build a compiler package. But in this case, what I did was insert a special bootstrap stage. That is this variable name here, gcc_libc_bootstrap: whenever it is set, it means that not the full GCC is being built, but only the host parts of GCC. GCC consists of the gcc binary itself, but it also comes with things like libgcc and some other pieces that are built for the target system, and those in turn depend on having a standard C library available.
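As a rough sketch of that conditional, using the gcc_libc_bootstrap name from the talk (the surrounding build logic is simplified, not the literal devel:gcc spec):

    %define gcc_libc_bootstrap 1

    %build
    %if %{gcc_libc_bootstrap}
    # Host-only stage: build just the compiler proper and stop before
    # the target libraries (libgcc etc.), since no target C library
    # exists yet.
    make %{?_smp_mflags} all-gcc
    %else
    # Full build: host compiler plus the target-side libraries.
    make %{?_smp_mflags}
    %endif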
Then, with that host-only compiler... which, by the way, when you build this just on your local machine, you can simply switch directories and continue the build where you left off. Here in the Build Service I added a separate package, to which I simply gave the suffix -bootstrap. This can then be used as the build dependency of a cross-epiphany-newlib-devel package for building the newlib library, and once we have newlib built and packaged, we can use that as a build dependency again, in the case that gcc_libc_bootstrap is not set, to build both the host and the target parts of the cross-compiler.
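Sketched as a build order, the chain of Build Service packages looks like this (the exact package names are illustrative):

    cross-epiphany-binutils          # assembler, linker and friends
    cross-epiphany-gcc5-bootstrap    # host-only GCC, gcc_libc_bootstrap set
    cross-epiphany-newlib-devel      # newlib, built with the bootstrap GCC
    cross-epiphany-gcc5              # full cross-GCC, host and target parts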
Originally I tried this for the following toolchains: not just Epiphany, but also the Renesas RX and RL78 architectures. At first I was using GCC 4.9, then I switched over to GCC 5, and most recently, with some help from Andes, I also tried an nds32le build as the first one with GCC 6. There are also a number of cross-compiler toolchains which are not yet entirely accepted upstream, in both binutils and GCC. One of those is the GNU PRU toolchain for the BeagleBone Black... I don't have one here, unfortunately. The PRU is also a kind of microcontroller, for real-time network processing and similar things, that, with the appropriate Linux kernel drivers, can have code loaded onto it and then be used for whatever processing you want. In that case I simply branched the packages as a proof of concept, taking the particular git commit that the patches were based upon, using that as a tarball instead of the one we usually package in devel:gcc, and putting those patches on top, just to build it. I think at the moment it's broken, but it should be relatively easy to get back to work again; I think it was a newlib update that I did that broke some things there. Maybe one or two weeks ago, the newlib package got accepted into Tumbleweed.
For now this is just a very basic package: it contains only the sources, to actually use for building or deriving cross-compiled versions of newlib from. In theory it would be possible to build a 32-bit x86 shared version of newlib. Unfortunately, I have failed to actually get that to build with the headers that we are using for openSUSE; if anyone knows more about that particular topic, they are very welcome to make that work as a proof of concept.
But for now, the main point of that package is to serve as a starting point for linked spec files. The GCC 5 package has successively been enhanced with the necessary %ifs for the dependencies, and conditionals for which parts are going to be built and installed. GCC 6 seemed to be missing a few of the preparations that we had gotten into the GCC 5 package, in particular for which architectures to actually go down the newlib path, but that's fairly minor, and that build actually succeeded once I got those in, so I'm hoping that things are okay there. One thing to watch out for: just because something built does not mean that it works. It might be that some binaries are packaged in the wrong location, and this will only be noticed once we actually have a package, or a local test script, that uses the compiler to compile a particular C file with certain include dependencies or something, to trigger the failure. What's next? It would be very nice if we could get some of this stuff, not necessarily all of what I've been playing with, into the official devel:gcc repository as a next step.
So basically: use the base-system newlib package and link it into devel:gcc as appropriate, either as newlib itself or directly as a cross-compiled version, and add the necessary architectures to the pre_checkin script, to have newlib libraries for those particular architectures.
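For context, linking a package into another OBS project is done with a small _link file; a minimal example, with illustrative project and package names, would be:

    <!-- _link file placed in devel:gcc/newlib, pointing at the
         Tumbleweed newlib sources -->
    <link project="openSUSE:Factory" package="newlib" />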
I believe we already have binutils available for Epiphany, for instance. And beyond newlib, of course, that's not the only way that cross-compilers can be built. There is also uClibc: I have, or at least I used to have, a cross-compiler toolchain in a home: project that was using uClibc for a no-MMU ARM build, formerly known as uClinux. And I've been using a very similar concept for the glibc-based openSUSE MIPS port that I was talking about earlier.

The question here is: in theory, we could go and check which architectures are actually available in binutils, which ones are available in GCC, and simply enable all of them. However, since GCC is a very core component of our distribution, the more stuff we enable that we don't need, the greater the risk of breakage that would interfere with submissions to Factory. So we'll have to have a discussion about what the sane set of targets is. For instance, since in openSUSE we do not have tools to actually transfer RL78 binaries to some microcontroller board, for me that would be the first candidate not to put into devel:gcc initially. Whereas for the RX architecture there's the Sakura board: a small microcontroller board in a very pink color that you can simply connect to your PC as a USB mass storage device, copy the file onto it, and then run it by pressing a button, or not even that. So it's relatively easy to actually use from a Linux system, and I guess that would be one of the candidates for actually having in there.
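As a hypothetical illustration of that workflow (device and mount paths are made up for the example):

    # Deploy a freshly cross-compiled binary to a Sakura-style board
    # that enumerates as USB mass storage.
    sudo mount /dev/sdb1 /mnt
    sudo cp program.bin /mnt/
    sudo umount /mnt    # the board then picks up the binary and runs it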
One topic that I have not yet looked into myself: there's obviously not only the compiler to compile code; once you actually have code compiled and deployed to some target, or running under emulation, you might actually want to debug it. So I have not yet looked into building cross versions of GDB. If anyone is interested, or has looked into that before, that might also be interesting to attendees.

Now, as a final slide, I have a brief overview. If you remember this slide here... sorry, here it is. We were seeing that we have binutils as the base tools that we need for building any target code, and then we had a bootstrap package and then the full package, with the library package in between. If we compare that to glibc, it gets slightly more complicated. As a first step, we need to package the Linux kernel headers. Then, for building glibc, we need, just the same as before, a bootstrap variant of the GCC compiler, in order to next build glibc.
Then there is a glibc-devel bootstrap package, which builds, or rather packages, the glibc headers and an interim version of libc, which is kind of just a stub set of binaries that are there to please the build system. That one we can then use in a second, intermediate GCC build. Once we've built libgcc as part of this mini GCC 5, which cannot yet build much more, we then build the final version of glibc, and once glibc is finally built, we can build the final GCC cross-compiler.
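Laid out as a build order, with illustrative package names for a MIPS target (openSUSE ships kernel headers as linux-glibc-devel), the chain would be:

    cross-mips-binutils               # assembler, linker
    cross-mips-linux-glibc-devel      # packaged Linux kernel headers
    cross-mips-gcc5-bootstrap         # host-only GCC, no target libs yet
    cross-mips-glibc-devel-bootstrap  # glibc headers plus interim stub libc
    cross-mips-gcc5-intermediate      # mini GCC: libgcc against the stub libc
    cross-mips-glibc                  # final glibc, built with the mini GCC
    cross-mips-gcc5                   # final, full cross-compiler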
You will note that I'm using GCC 5 here; that's simply because in the OBS server that I've been using for MIPS I still had this based on GCC 5, and for GCC 6 we may need to add a few additional %ifs there. In particular, one issue to keep in mind when building those cross-compilers is that we sometimes have a choice, or kind of a conflict, between installing target files into the sysroot, which for us is /usr/<triple>/sys-root, or installing them into the root file system. We have, I think, taken a quite hacky intermediate approach, where we install target binaries into the sysroot, where we definitely need them for cross builds, but also have some things, like the libstdc++ headers, be reused from the native GCC 5 package. That means the cross GCC 5 package cannot install the same host files as the native package, so we need to not just install them to a different location; we then also need to move some files around but not others, and delete some other files, in order to make this work. So this is a little fiddly, and that was, I guess, the main reason why this had not really been attacked for quite a long time.
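A sketch of what that file juggling can look like in a spec file's %install section; the sysroot path follows what was just described, and the concrete deletions are illustrative:

    %define sys_root %{_prefix}/%{cross_arch}-suse-linux/sys-root

    %install
    make DESTDIR=%{buildroot} install
    # Keep target libraries under the sysroot for cross builds, but drop
    # host files that the native GCC package already owns, e.g. the
    # libstdc++ headers, to avoid file conflicts:
    rm -rf %{buildroot}%{_includedir}/c++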
For uClibc it works quite similarly, although I admit that the current status that I have in my home repository for uClibc is quite hackish: it was using a random set of kernel headers that I had generated on my local machine, whereas in this case, as indicated above, I was using a bit of macro magic and tools in order to get those from the latest version of the kernel source package available in that repository. So basically we're using the host's kernel headers in order to generate the target headers; whether that is a good idea is up for discussion, which brings us to the next point. Any questions, comments or suggestions? To the right, in the front, a microphone please.

Could you recall slide number five for a second, please?
So, answering the question of which of all the possible targets to enable with this one: there's a shameless plug I want to put in here. For at least two years now, several of us, me included, have been working on the Windows target, which is where I think you can draw quite some inspiration for spec files and for cross-compiling, both with and without glibc. So that's one of the things to bring up here.

I admit that I have been using that for cross-compiling some packages as well. My understanding was that, like most of the other stuff in CrossToolchain, it was using totally separate source code packages. Is that correct?

It uses one Build Service package for GCC, for example, which contains three spec files for the three stages of GCC.

But they are separate GCC packages, not ones directly linked or reused from devel:gcc? That was my question.

Yes, they are separate.

Okay, and this is why they also look rather clean compared to this GCC. So you have one bootstrap step, and that's sufficient for Windows?

No, there are three steps: three intermediate ones and then a final one.

Okay, the same way as here, then. I think like the newlib one, in fact; well, there are only two there: the bootstrap one and the full one.

For the Windows target we use the full cross-compiler to build itself for the target architecture, so that we get the libstdc++-6.dll, because you don't get that from the cross-compiler.

Actually, you can, possibly, I believe. For GCC, I think it's not in the pre_checkin script, it's in change_spec: there is a variable at the top which determines which languages are being built. Not every architecture can actually build the C++ compiler, of course, but obviously for Windows and the x86 architecture that should not be the limiting factor, so it's probably just a setting that could be enabled. It might be possible to merge the two then. But since I'm not a toolchain developer myself, just a random contributor, I can't make any promises as to what actually gets accepted into devel:gcc, and I did not spot Richi or Matz or anyone else from the toolchain team here to actually comment on that. Further comments or questions?
Okay. So, about a year and a half ago I was also toying around with cross-compilers for some ARM and also MIPS boards I have, and back then I tried crosstool-NG to get the toolchain, and it was quite a mess, because you had to try different versions of all the components of the toolchain to get it to run. Is that still like that? Do you still need to try different combinations?

So, yes. By the nature of our OBS and our packaging guidelines, or our policies, tools like crosstool-NG, or others like it... I think you can also use something like, not Busybox, what's the similar project... Buildroot; I think you can also use Buildroot to build cross-compiler toolchains for certain architectures. They all either assume that they have network connectivity to download packages from the internet, or they specifically rely on having the source code tarballs in specific locations. And what I've seen for Xtensa in CrossToolchain simply violates the philosophy of openSUSE Factory, where you have one source code tarball per package, and not binutils, GCC, libraries and whatnot all thrown into one source package alongside the tool itself. Thanks.

Okay, thank you. I think we have to cut off now; we are already five minutes over time. Thank you very much for your time, and let's hope that we can find a way to get this working into Tumbleweed fairly soon. Thank you.