
LLVM and GCC


Formal Metadata

Title: LLVM and GCC
Subtitle: Learning to work together
Number of Parts: 490
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Abstract
At the GNU Tools Cauldron we held a panel discussion on how GCC and LLVM can work together. The video of that discussion can be seen at https://www.youtube.com/watch?v=PnbJOSZXynA. We proposed a similar discussion for the LLVM Developers Meeting, but the reviewers suggested that such a discussion would be better held as part of the FOSDEM LLVM devroom, since that was more likely to attract GNU developers as well.

We wish to explore how Clang/LLVM and GCC can work together effectively. The participants will explore opportunities for co-operation between the projects. Areas to be covered include: collaboration on issues related to language standards, whether changes to existing standards or implementing new ones; maintaining ABI compatibility between the compilers; interoperability between tools and libraries, e.g. building with Clang and libstdc++, or building with GCC and linking with LLD; and communication channels for developers via Bugzilla or mailing lists. The compilers are part of wider projects providing all the components of the toolchain, and we anticipate the discussion will roam to low-level utilities, source-code debuggers and libraries as well. We hope the output of the discussion will inform future work between the two communities.

The panelists are: Arnaud de Grandmaison, a Director of the Linux Foundation, currently working at Arm. He has been developing with LLVM for the last 10+ years, to support custom architectures or enable architecture exploration. Pedro Alves, a global maintainer and major contributor to the GNU Debugger (GDB). Tom Tromey, a long-time GNU maintainer. He wrote Automake, worked on gcj and Classpath, and is now a GDB maintainer.
Transcript (English, auto-generated)
Ladies and gentlemen, I'm Jeremy Bennett. This is a bit of an experimental first for the LLVM devroom. It's a panel session, and we have one roving microphone. It's intended to stimulate a discussion.
Quick show of hands, how many people here have worked or are working on LLVM? And I'd include in that anything in the LLVM toolchain. And how many people here are working on GCC or anything in the GNU toolchain? How many people here have dipped into or are dipping into both?
So actually there's a number. The point is there are quite a few people who work or have worked on both projects, and it's possibly time we try to work out how to work better together.
We had a first session on this at the GNU Tools Cauldron. And you can go and watch the videos online. The link is in the abstract for this. But we wanted to come to a primarily LLVM audience, and the LLVM Foundation suggested that FOSDEM was where you were going to find the broadest spectrum of views.
Not in California, and so that's what we've got. And we have three panelists to lead the discussion: Arnaud de Grandmaison, Pedro Alves, and Tom Tromey. I'm going to let each of them briefly introduce themselves, and then what we're looking for is a discussion about how we can work together, and in particular afterwards
we'll look at the recording and see if there are some concrete ideas of things we can actually take away and do, rather than having a nice comfortable chat about them and then nothing happens. So here's the aide-mémoire, and we can cover anything else. It's not just GCC and Clang/LLVM.
It's the whole toolchains: LLDB, LLD, GDB, binutils, and so forth. So without further ado, let me ask each of the panelists to just say a couple of words about themselves. Hi, my name is Tom Tromey. I work at AdaCore, primarily on GDB.
But in the past I've worked on GCC, and I also worked on Rust for a while, so I've worked on LLVM and LLDB as well. Hi, my name is Pedro Alves, and I work at Red Hat on the debug team, and I
have been working on GDB for a while, and I'm a GDB maintainer and contributor. And I'm Arnaud de Grandmaison. I'm working at ARM. I spent a long time in the compilation team at ARM working on LLVM, and before joining ARM, I was in startups working on a custom DSP and a custom processor, and I was in charge of the toolchain, which was obviously LLVM based.
Okay, just goes to the camera. Okay. Okay, so this is intended to be a session where everyone participates. The three at the front are here to stimulate discussion. I've put up, on the basis of the meeting we had at the GNU Cauldron, some areas that I think matter and where we could work more closely together: language standardization, ABI compatibility, interoperability between the toolchains, and also channels of communication. How do we talk together? Do we need different conferences and so forth? What I'd like is each of the panelists now to
give their first thought on the one most important thing we could do to improve cooperation. I should have expected to be on the spot.
Yeah, I'm very bad at planning. My primary focus is on debugging, so that's what I know the most about. And when I look at the debugging world, I think it's actually in a worse state than like the sort of a user language world, you know, like writing C or C++, and I think that one
thing that would be very good to do is to have GCC and LLVM sort of explicitly cooperate on improving the DWARF standard and documenting and sort of standardizing the extensions that they both use. So I think that would be a very fruitful area of cooperation.
Wow, you've said everything. I come from a debug background as well and actually my work on the compiler side is not very meaningful.
Looking at the current state and the current, like, duopoly in the debugger land in open source, I see two aspects that would be nice to improve. One of them is an area of focus right now for our group. It's the quality of the debug information.
We've been looking at the quality of the DWARF that compilers emit. Does it represent the, you know, the original source accurately? Does it support the whole set of DWARF, the whole features, which compiler
emits correct output, which one doesn't, which one needs to be fixed? It would be nice if we joined efforts on the testing side, in, you know, the testing frameworks that validate the quality of the debug information. I know that there have been efforts on the LLVM side about this.
It would be nice to chat about it. And that was the second point, which I forgot right now. It will come back, sorry. Okay, so it's a bit hard to be in the third position there.
But maybe to differentiate a bit, and I think it tightly links to what you've said: where we could probably better cooperate as communities is around the language standards. My understanding of the language standards, or the standards, because whether we consider DWARF a language or not, I don't know, but
the thing is, my impression is that mostly in the standards committees, we have companies represented, and whether it is LLVM or GCC, these are more tools for experimenting with the new standards.
But this is not representing the whole GCC or LLVM community. I don't know if I'm right there. But I don't think we have LLVM-only representatives or GCC-only representatives at those standards bodies.
On kind of a tangent that touches standards, again from the debug side, because that's my expertise, for example, there's been a push on LLVM for OpenMP support, I think, and yeah, I'm sure. And for debugging OpenMP, there's a standard called OMPD, which is OpenMP debugging,
and that requires implementing a library that exposes a standard interface and what's happening right now is that LLVM is implementing that library and GNU side also needs to implement something like that, exposing the same library and other tool chains like the Intel compiler
also is doing the same, and we have a mix-and-match matrix of debuggers across runtimes: GDB against the LLVM runtime, GDB against OpenMP from the GNU side, and LLDB against GCC, and, you know, the matrix. It would be nice to see about sharing that infrastructure, maybe sharing even the code, and that can only work if we cooperate and discuss and experiment together. And the other point which I had forgotten earlier came back to me. I wanted to mention that
LLDB uses the remote protocol from GDB, and it has its own extensions, and it would be nice to cooperate in the sense of the ones that are generically usable would be standardized and documented in a single place.
And the longer we take to get to that point, the more we will end up diverging, and we don't want to end up in a place where tools end up more incompatible. Sorry about talking only about debug stuff.
Okay, thank you. At this point, I'd like to throw out the same question to the assembled multitude. If you'd like to put your hand up, so we catch it on tape, if you make your point, I will try and summarize it and repeat it. So from the audience, anyone here have a view on
the most important areas where we could improve cooperation between GCC and LLVM? I think a neutral, somewhere of a neutral forum for discussion would help, in that you've got a set, basically
LLVM mailing lists, you have GCC mailing lists, you can be known in one and not the other, and it's sometimes intimidating to cross the gap. So, let's say, more places where you can sort of tread on more neutral ground. Yeah, if you just say it, I'll repeat it for you.
So the suggestion is that GCC's big regression test suite could be brought together and have it as a test suite for LLVM.
I have to say that professionally we've been doing that for 10 years. So it's there, but it will be good to formalize that so everyone can pick it up. They don't have to come and get our magic copy of it. Any other comments? Excellent, so we now have some of those there. So that's great news, the torture tests are there. Which are the ones that cause pain?
Well, no, let's talk about that for a second. I'm really curious about this. Like, like I know GCC incorporates some things from LLVM, sanitizers or whatever, and those are just imported periodically, and maybe there's local hacks, I don't really know.
On what basis are the GCC torture tests integrated into LLVM? Do patches go back to GCC? You know, is that, because I feel like this is, this is an important thing where I feel like, like for instance this thing about cooperating on DWARF, which is
territory I'm really comfortable with, right? Like there, part of the process has to be like a social commitment by the maintainers to say if you're going to extend the DWARF, you have to follow their agreed upon thing. You have to upstream it or document it or whatever, you know, but that requires like a commitment from both sides.
But, um, you know, I'm concerned like just hearing that and if you don't know the answer, like is it a fork of the GCC test suite that to me seems like worse somehow, you know? It's a bad outcome.
I believe it's actually, it is a fork and it works by actually having a blacklist of tests that won't run. That's been done once before and of course then people end up running a 10-year out-of-date test suite. So I think there are two good points there. One, it's good that the cooperation is happening, but the second is actually if it just leads to another fork, then it's possibly not long-term valuable. More comments from the
assembled multitude? Yeah, I think one thing that that works well that I see occur sometimes between projects is just practicing careful communication and respect kind of thing. It's very common
for people, like I tried to dissuade people from saying, oh, why was this implemented this way or not that way kind of thing and generally be respectful of the competition I think helps in all discussions on either sides of things. One thing that I wish we had more of was more of
like an RFC proposal would be interesting for certain language extensions that like right now either side will ship extensions without input from the other and not necessarily take these to the ISO standards bodies, and I think that's okay, but then
typically multiple implementations will work out different kinks or interesting edge cases or things don't compose a certain way with other features and these typically only kind of shake out once you get more than one implementation.
So the comment being made is that if you have divergence of extensions as part of the language exploration, then people writing application code
have a nightmare of making sure it works on both GCC and LLVM. Just about the documentation thing. I think one issue there is, you know, if you work on GCC, you're familiar with GCC, and if you work on Clang, you're familiar with Clang. There may be no one who can write that document
who understands the subtleties of the divergence in some particular feature, you know. But I really liked your point, and it really reminds me actually of like what happens in the web world where, you know, different browsers collaborate and features sort of don't become web standards until they're implemented in multiple browsers, and
they have, you know, like you said, more or less an RFC process, and I think that would be an excellent idea. So on the RFC process, maybe one thought I have of where it might get hard to implement it
is that the more people you involve in a discussion, the longer it takes, and it seems sometimes... I've seen in the past incompatibilities between two toolchains appear, because in one community there's a pressing need to implement something that would be relatively straightforward, and in the other community, they don't see the need
as quickly, and so one community moves ahead because there's a genuine need for it, and the other community doesn't react as quickly, so that might be something to overcome, maybe not necessarily completely impossible. From my point of view, I would say my personal experience is I find most of the difficulties to come from probably the ABI side in that
whenever a new feature gets implemented, quite often without realizing it, some small binary interface all of a sudden gets created, and it sets a de facto standard, and it doesn't get documented. Maybe that's because I work more on back-end side stuff, and it's just
it seems almost probably every week, at least every month, some small extra addition happens to a binary interface somewhere, and I would say the majority of binary interfaces go completely undocumented, and so I'm starting to wonder would it make a difference if we tried to recognize, oh, there's a new binary interface. That's at least just documented,
so it becomes more visible, and it's not just three people who implemented it, and you have to reverse engineer from code. Would that be helpful? I have the impression that just documenting it would not solve it,
because it's only when the other team goes and tries to implement it, even based on documentation, that they realize, oh, this corner case wasn't considered. And it's only them that will notice it, because the original team just implicitly thought it worked that way, and didn't even think it would be a possible design point change.
So it feels to me like a lot of this is going to be based around communication and just reaching out and being friends, making bridges, and breaking away from that mindset of them versus us. We're all just
toolchain people who work on the same kinds of problems. You know, if I'm adding this extension for the C language, I should reach out to that friend on the other side and see what he thinks about it, and hash it out a little bit at least, in public preferably, and make that a way to shake out some things before they are done and documented. Then what frequently happens is it's released and shipped and out, and only later is the other side aware: oh, there's an extension for that, let's try it. And then it's already too late,
because it's already in the wild. So the point is, you know, just reaching out. And to build on the point you raised earlier on, and also on what Christophe was saying with the fact that the two
compiler teams may not be working with exactly the same agenda for different reasons. I think what would really be important apart from having some documentation for the ABI would also be to have some tests, because just the description of the ABI is often not enough.
ABI can be quite subtle, and the corner cases are often not described in the specification, or why you are doing things this way, and so there you really need tests. That's an excellent point, and it's something that you can do in your own team without reaching outside, right? And it feels
to me like that's just good engineering, right? If you're not doing that, you're doing it wrong. Okay, we've got a question at the back. Patiently waiting.
So the question was the small device. It's a very good question. So Microsoft have
open-sourced their library last year. Yes, I mean, I think the whole point is to bring in all voices. Microsoft is changing. They were at the GNU tools cauldron this year, which I think is a first, and a very positive and big support. They are changing as a company,
and yeah, of course. The whole point about free and open is it's inclusive. That's a great point. Yeah, two questions here. So that's somehow related to specification and testing, but writing tests is not the most tremendous task or the most enjoyable task,
and whenever we develop a new security feature or a new optimization, we could write the tests as a whole community and in a generic way, and then everybody could implement it and take advantage of the test suite. That's some area where both from an engineering point of view
and from a time-spent point of view, collaboration is fruitful and not very difficult to set up. I wish that for Fortify, so there would be a test suite that I could use. Okay, that's a good point. From my personal experience, I know the GNU regression test suite
is based on DejaGnu features that say, is this feature supported? And if you write those checks generically, you can give them GCC or LLVM, and it'll work out whether the test can run. The problem is that the GCC test suite is huge and very old, and only the best and newest tests actually work like that. So the real world hits a bit there.
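As a sketch of the mechanism being described (the directive names are from the GCC/DejaGnu testsuite; the specific pthread requirement is just an illustrative choice, not taken from the talk), a test file can declare the features it needs, so the harness skips it rather than fails it on a toolchain or target that lacks them:

```c
/* Sketch of a DejaGnu-style test header; the pthread requirement
   here is an illustrative choice. */
/* { dg-do run } */
/* { dg-require-effective-target pthread } */
/* { dg-options "-O2 -pthread" } */

/* ...test body follows.  If the effective-target check fails, the
   harness reports the test as UNSUPPORTED instead of FAIL, which is
   what lets the same test source run under either compiler. */
```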
I have a similar question. There was a talk about changing the compiler from GCC to Clang, and I saw many projects that try to standardize, don't use GCC-specific stuff, and I think it
would be great as a user to have more interoperability, so I can just change my compiler without having to think much, and I think it currently goes the other way around. Some projects just require you to use Clang, and if I have a platform that isn't supported by Clang, I have no chance to do anything. And yeah, it would be great as a user.
Here, I'd like to say that on one hand, interoperability is a nice goal to have. The other thing, I mean, if you try to do embedded software, it's often a good idea to test
it with multiple compilers because they will make different implementation choices, and you need to accept that what you have written is not portable. So I mean, that's the point of interoperability. Just a subset will be interoperable. So that's, I mean, it goes both ways. I'd like to ask the panel what they think about
deprecation as such, or at least not deprecation, I'm talking re-implementation. So like it or not, all the calls for collaboration, there will be features that get pushed ahead further in one than the other, and then the other will try and catch up as such. Is it sort
of in some ways akin to code review, you do something downstream, you then move it upstream, then it gets code reviewed, it gets changed. And is there a way of deploying newer features as maybe some kind of tentative extensions, or is it anything that you put in binary for life?
I guess, is there room for, say, communities to introduce things as, sort of, okay, you can use this, but it's not standardized yet, it may change in the future, that type of thing. That's a good question. I think we ought to ask the panel, and I'll start with Arnaud this time.
So I'm not sure I understood your question fully, but around deprecation, it's a bit hard, because as long as you have one user who still refuses to upgrade their code base, you cannot fully deprecate the stuff, or they will have to stick with an old version of the tools. I'm far from an expert on the compiler side and the runtime side,
but there are some things that have been deprecated, like I'm thinking, for example, on the glibc side, there are facilities to, if you really, really need to, you can create a new entry point and use... Help me, Mark. Symbol versions.
Symbol versions, exactly. Thank you. The old binary will still run and link at load time with the original deprecated version, but if you recompile the program, it will be using the newer ABI entry points, so it's possible to deprecate things on the
runtime side. On the compiler side, I'm not sure what the answer is. I kind of feel like if it's there, it's there forever. Ask Serge, I'm sure he has found some compiler flags from very, very old versions of the compilers. And the compilers are just ignoring them, just so that the build scripts don't fail. Yeah, GCC does that. GCC does that, but that's not specific to compilers. And it's not specific to compilers. I mean, ls has obsolete flags and they are still there, just a no-op, so that's common practice.
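A minimal sketch of the glibc-style symbol versioning just described (the library, symbol, and version names below are invented for illustration): the shared library keeps both entry points, a version script defines the version nodes, and GNU assembler `.symver` directives bind each entry point to a version.

```c
/* libfoo.c -- sketch only; foo, FOO_1.0 and FOO_2.0 are invented names.
 * Built roughly as:
 *   gcc -shared -fPIC libfoo.c -Wl,--version-script=foo.map -o libfoo.so
 * with foo.map containing the version nodes:
 *   FOO_1.0 { };
 *   FOO_2.0 { };
 */
int foo_v1(void) { return 1; }  /* old, deprecated ABI entry point */
int foo_v2(void) { return 2; }  /* replacement entry point         */

/* '@' keeps the old version available to already-linked binaries;
 * '@@' makes FOO_2.0 the default that newly linked programs pick up. */
__asm__(".symver foo_v1, foo@FOO_1.0");
__asm__(".symver foo_v2, foo@@FOO_2.0");
```

Old binaries keep resolving foo@FOO_1.0 at load time, while anything relinked against the library gets foo@@FOO_2.0, which is the runtime-side deprecation path being described.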
Although software was named soft, as opposed to hardware. Once in there, it becomes hard stuff. Yeah, that's the command line aspect. On the coding side, on the language support side,
is there any... When will we get rid of C89 or earlier? Yeah, in 40 years. I think one thing is you have to differentiate between the different cases of compatibility. Like for source language, in a way, it's easier to just say,
we will no longer accept K&R declarations or whatever. Those are just errors now. I mean, I don't know if they are; I'm just saying it hypothetically. And then people who want to have a 30-year-old C compiler should go find one. What's hard is ABI compatibility,
especially if... What happens pretty commonly is someone implements the ABI and they think they did a good job, but they made a mistake and it's not found out till later. And then, if you think about it, if you just have one toolchain, like GCC, that made a mistake, well, they think it's better to just leave the mistake.
Yeah, now it's the standard, even though it's ridiculous. But it's more difficult when there's two compilers involved because you can have this thing where you have parallel implementations and one or the other makes a mistake, or they disagree about what's a mistake,
or it's undocumented, and so they both made a choice. And even those cases can be treated differently. You could say, well, as communities, we'll have a commitment to following the standard. So if you catch us in a mistake, we'll make our users suffer a little and change. But that has to be like a two-way street. Everyone has to agree to that as a
social thing. And then for the case of implementing a new feature, like it's not documented and you want to make some choice, you have to also make it like it's a social problem. You have to say, we'll commit to sending a note to some ABI list to say, hey,
this is what we're planning to do. Stop us before we strike or whatever. Sure. Yeah. A question, I guess, to Peter, to reinterpret your question: were you asking more about how we could possibly ship experimental features with the thought of
potentially deprecating it in the future? Yeah. So what I'm thinking less about, say, thinking about implementing things that are already standardized, like things in the front end, I'm thinking about, say, I've got some new binary security features. I'm just going to pull stack protection out of the hat here, and it will work in a certain way in a compiler.
That's a downstream binary thing. The compiler can choose how it implements it. But it has, as Christophe mentioned, some form of ABI. Then when, say, Clang comes up to that, does it need to match precisely? It might not be possible to match precisely. At the moment, I'm talking about for things like that that are kind of not in the area
of anything that's really written down, how do you get it so that, say, for example, you can maybe transition to a form that both can accept, and that might involve changing. And I think partly it's also going to be driven by the community. I think when there's demand from users that Clang and GCC work together, they will do it. When there's
not demand in areas that nobody cares about, it's not going to happen. So I think it would be up to the community to decide what happens. So, yeah, one of the reasons why I remarked on trying to produce some documentation, if you realize you're introducing some new binary interface, even if it's a small one, was just to try to make it a little bit easier for
other projects to start using that interface. So it would indeed be really nice if we could come up with a way of introducing a binary interface without it being forever from the start, making it more evolvable. I don't think there's an easy way to solve that problem.
The only partial solution, I think, that might help a little bit for some binary interfaces is just to try to make sure there's always some kind of versioning there. So if you need to change, at least it's very easily detectable. Something like that might be one step in the right direction. But then it would need general agreement. Oh, yes, every time we recognize
we're introducing a binary interface, we'll make sure there's some metadata somewhere, easily interpretable, giving the version of it. Thank you. At this point, we're quite a long way through our discussion. A lot of issues have been raised here. I want to turn to solutions now. We've had one suggestion, which is for having a neutral mailing list, one that is neither GCC's nor LLVM's, for discussing neutrally. That's a good one, I think, to take away and see what organization could host such a list that would be trusted and respected. We can take that one away. I'd like to now open up to the panel for solutions.
We've got one LLVM director here. I don't think we've actually got a GCC steering committee member, have we? No, no, no. But I'll start with you, Pedro. Solutions. Oh, man. Start from scratch.
The third compiler. The solution is standards, right? I don't know solutions. You were mentioning ABI versioning, and it reminded me of work that Red Hat is doing, and maybe I'll ask Mark Wielaard to help me with this.
The next project, what's the name of that? The binary tagging thing. Not that. No, that's Dodji's. Nick Clifton's. Annobin, exactly. Can you talk a little bit about that?
See what I've done there? Mark, tell us the solution. No, it's not a solution, but what Annobin does is record all the compiler flags and all the ABIs it can recognize in object files. So it's not a solution,
but you're right. I hadn't thought about that. There is, I don't know how far
Nick is with the Annobin plugin for Clang, but he was working on that. Annobin is currently a plugin for GCC where it will output ELF notes for all the object files
in which it records all the compiler flags and ABIs it's currently respecting. And so when you link your object files together, your binaries have all the flags
ever used in all the compiler versions. And he is also working on that for Clang. I don't know how far he is. It's at least an interesting thing to do.
Fedora does it now distribution-wide, so that you can see what is actually being used in a whole distribution, which is nice. I don't know whether that work has progressed to a point where the linker can reject incompatible ABIs, but it could be done. You can use it to query the whole distribution, see what ABIs you're using. And I think that for versioning, we could use it to validate whether the ABIs are compatible,
which version are you using. I feel like that could help. So if we want better collaboration, not to be just wishful thinking,
because I think everyone is in agreement that it's a good thing to do. If we want to be pragmatic, it only happens on real projects. And I think both our communities are already collaborating on an as-needed basis. I know, for example, at ARM, the LLVM and the GCC people are working in the same building.
It was not the case for some time, but now they are all together. And we try to make sure, because ARM has an interest, that there is proper support for their products. And there are other cases where I think there is collaboration.
So are there other projects where we could have a broader collaboration? You know, when I listen to this, I think in some cases, an RFC process,
something like that, a shared way to communicate directly between compiler developers is good. For backends, like for ABIs, I think there are already existing institutions that handle many of these things. There's the C committee and the C++ committee.
There's DWARF. Some of those, and I know more about DWARF, which I think is institutionally kind of weak, need to be strengthened and supported by these communities. But some of these areas, like, I think the ABI situation is similar.
It needs a little more commitment from developers and stuff. And then some of the areas are terra incognita, right? Like linkers are not documented at all, as far as I can tell, and just work by magic. And like that would be a good thing to change and maybe create a new institution, you know, to handle that, right? So I don't know.
Yeah, we're running just a bit short of time. So I'm going to actually draw things to a conclusion now. We've heard issues where we could do better. We've heard a willingness to do better.
We've heard some concrete suggestions, mailing lists, projects that we work together, some detailed technology from Mark and Pedro, the possibility of needing new institutions. There's one other I think we've missed, which we did once. We had an LLVM cauldron the day before a GNU tools cauldron, and 40% of the people who went to one went to the other.
And that was a very good thing, particularly the evening in between, when we all came to the same reception and all drank and ate together. So, my two penn'orth: I think possibly we need to revive that idea, so that we actually do talk to each other, because sometimes only the human interaction matters. It does matter.
I'd like to thank you all for your time. I'd like to thank our panelists for giving up their time, Arnaud in particular, who came up here personally just for this session. So I really do appreciate you traveling up here. Please carry on the discussion, send your feedback. If you can't find anyone else to tell, send me an email.
Thank you very much.